
Beyond Build vs Buy: A Multi-Layer Technology Sourcing and Governance Decision Model under Strategic, Economic, and Uncertainty Constraints

Thesis Draft

Author: Yannick Huchard

Date: 29 April 2026

Discipline: Technology Strategy / Information Systems / Enterprise Architecture

Abstract

This thesis develops a technology sourcing and governance decision model that extends the classical build-versus-buy dilemma into a broader, more realistic decision space. In contemporary enterprises, technology leaders no longer choose only between internal development and external acquisition. They also decide whether to rent capabilities as services, partner for co-development, reuse existing assets, compose modular ecosystems, or accelerate implementation through AI-assisted engineering. Existing frameworks often remain binary, heuristic, or narrowly cost-centric, making them insufficient for digital, regulated, and innovation-intensive environments. This thesis proposes a multi-layer decision model that separates governance alternatives from implementation modes. Governance alternatives include Build, Buy, Rent, and Partner. Implementation modes include Reuse, Compose, Generate, and conventional custom development. The model evaluates alternatives across nine dimensions: strategic value, internal capability, externalization capability, technology maturity and accessibility, economic attractiveness, control/compliance/sovereignty, risk exposure, flexibility/reversibility, and option value under uncertainty. The thesis integrates transaction cost economics, the resource-based view, dynamic capabilities, real options reasoning, and multicriteria decision-making into one coherent framework. It argues that technology sourcing should be treated not merely as a procurement or engineering choice, but as a capital allocation decision over future capabilities, control positions, learning trajectories, and strategic optionality. In addition to the conceptual model, the thesis provides a scoring system, gating rules, measurement constructs, and an empirical validation design suitable for expert calibration and multi-case analysis.

Keywords: technology sourcing; build versus buy; IT outsourcing; governance; dynamic capabilities; real options; platform strategy; multicriteria decision-making.

Table of Contents

Chapter 1. Introduction
Chapter 2. Literature Review
Chapter 3. Theoretical Foundations and Research Gap
Chapter 4. Conceptual Model Development
Chapter 5. Construct Definitions and Operationalization
Chapter 6. Scoring Model and Decision Logic
Chapter 7. Research Methodology
Chapter 8. Empirical Validation Design
Chapter 9. Discussion
Chapter 10. Contributions
Chapter 11. Limitations
Chapter 12. Conclusion
References

Chapter 1. Introduction

The build-versus-buy decision has long been one of the most persistent questions in technology management. In its classical form, the dilemma appears simple: should an organization develop a capability internally or obtain it through the market? That formulation remains useful as a starting point, yet it no longer captures the full reality of contemporary technology governance.

Modern enterprises operate in a decision space shaped by cloud platforms, modular ecosystems, software-as-a-service, platform-as-a-service, open-source infrastructure, strategic partnerships, managed services, and AI-assisted engineering. In this environment, leaders do not decide only whether to build or buy. They decide what they must control, what they should learn, what can remain reversible, what can be standardized, and what should become part of the organization’s enduring capability base.

The central argument of this thesis is that technology sourcing decisions should be understood as governance choices over future capability ownership, dependence, learning, optionality, and value capture. A sourcing decision does not only determine how a capability will be obtained. It determines what the firm will know, what it will govern, what it will outsource, what it can later change, and where it will sit in a broader ecosystem of technology providers and partners.

This thesis therefore proposes a multi-layer technology sourcing and governance decision model. The model extends the traditional binary by introducing four governance alternatives—Build, Buy, Rent, and Partner—and by treating Reuse, Compose, Generate, and Custom Develop as implementation modes that can exist across those governance forms. The resulting framework is intended to be both theoretically grounded and practically usable.

The work contributes in four main ways. First, it updates the conceptual vocabulary of sourcing. Second, it separates governance choice from implementation logic. Third, it makes flexibility and reversibility central rather than secondary. Fourth, it provides a research-ready operational structure capable of calibration, testing, and application in executive decision contexts.

1.1 Research Problem

Organizations increasingly make technology sourcing decisions under strategic pressure, regulatory constraint, technological turbulence, and uneven internal capability. Yet the frameworks used to guide those decisions often remain narrow, local, and heuristic. Many organizations still rely on informal judgments, isolated business cases, or cost-led reasoning that underweights control, risk, learning, and optionality.

The research problem addressed in this thesis is therefore the lack of an integrated, rigorous, and operationally usable model for selecting among modern technology sourcing and governance alternatives.

1.2 Research Question and Objectives

The core research question is: how should organizations evaluate and choose among Build, Buy, Rent, and Partner alternatives when sourcing technology capabilities under strategic, economic, regulatory, and uncertainty constraints? To answer it, the thesis pursues five supporting objectives:

  • reconceptualize build-versus-buy as a broader technology sourcing and governance problem;
  • distinguish governance alternatives from implementation modes;
  • identify the constructs that materially shape sourcing decisions;
  • operationalize those constructs into a scoring and decision model;
  • define a robust empirical validation pathway for the proposed model.

Chapter 2. Literature Review

2.1 Scope of the Review

This literature review draws from five major streams: make-or-buy and boundary-of-the-firm theory, information systems outsourcing, the resource-based view and dynamic capabilities, real options reasoning, and multicriteria decision-making. The purpose of the review is not merely to summarize these streams, but to show why none of them, in isolation, fully explains the modern sourcing decision.

The review demonstrates that technology sourcing is now a hybrid problem. It is simultaneously a governance problem, a strategic capability problem, a risk problem, a flexibility problem, and a decision-structuring problem. This multi-dimensionality explains why a more integrated model is necessary.

2.2 Historical Evolution of Make-or-Buy

The make-or-buy question originates in the broader economic problem of firm boundaries. Coase explained the existence of firms through the comparative costs of using markets versus organizing activities internally. Williamson later expanded this logic by emphasizing transaction costs, asset specificity, uncertainty, and governance hazards.

In operations and manufacturing, make-or-buy was associated with cost structures, quality control, supply dependence, and vertical integration. In strategic management, it became linked to the protection of core competences and the preservation of competitive advantage. In information systems, the problem evolved into decisions over outsourcing, application development, infrastructure management, and technology services.

What has changed most in recent decades is not simply the number of sourcing options, but the nature of what is being sourced. Organizations now source embedded expertise, operational maturity, innovation velocity, elasticity, compliance support, ecosystem access, and in some cases signaling value through association with technology leaders. This widening of the sourced object is central to the present thesis.

2.3 Information Systems Outsourcing

The information systems outsourcing literature provides the most direct foundation for this thesis. It has shown that outsourcing decisions are shaped not only by cost, but also by strategic importance, measurement difficulty, skill availability, governance quality, partnership structures, and outcome uncertainty.

Dibbern and colleagues demonstrated that IS outsourcing research had already become a complex, multi-stage field involving decision, transition, governance, and outcome phases. Later reviews by Lacity, Khan, Willcocks, and others further clarified that the field could not be explained by a single theory and that outsourcing outcomes depended strongly on client capability, vendor governance, and contextual factors.

This literature is especially important because it makes clear that externalization is not passive. Successful buying, renting, or partnering requires organizational ability to specify, select, negotiate, integrate, govern, monitor, and exit external arrangements. That insight directly informs the construct of Procurement and Externalization Capability in this thesis.

2.4 Transaction Cost Economics

Transaction cost economics remains one of the most influential foundations for sourcing research. It explains governance choices by comparing the hazards and costs associated with market exchange and internal coordination. Asset specificity, uncertainty, measurement difficulty, and opportunism are central variables.

In technology contexts, transaction cost economics is highly relevant whenever a capability is difficult to contract for completely, highly interdependent with internal processes, subject to future adaptation, or exposed to significant dependency risk. It helps explain why some capabilities should remain internal even when external markets appear efficient.

At the same time, transaction cost economics is limited when the sourcing decision is driven by learning, strategic differentiation, or long-term capability accumulation rather than purely governance efficiency. This is one reason why the present thesis integrates it with other perspectives.

2.5 Resource-Based View and Dynamic Capabilities

The resource-based view shifts attention from governance hazards to value creation. It suggests that firms should protect and cultivate resources and capabilities that are valuable, rare, difficult to imitate, and difficult to substitute. In technology management, this implies that some software, data, orchestration, and algorithmic capabilities should not be treated as ordinary procurement items.

Dynamic capabilities extend this perspective by emphasizing the capacity to sense opportunities, seize them, and reconfigure assets over time. This is highly relevant for sourcing because the attractiveness of Build, Buy, Rent, or Partner is rarely stable. Technology standards mature, providers change, regulatory conditions evolve, and internal capabilities improve or decay.

Together, these perspectives support two key ideas in the thesis: first, strategic value and knowledge accumulation matter materially to sourcing; second, sourcing decisions must include reassessment logic rather than assuming permanent optimality.

2.6 Real Options and Decision Structuring

Real options reasoning contributes the idea that flexibility has economic value. Under uncertainty, the ability to defer, stage, switch, expand, contract, or exit may be worth more than apparent short-term cost advantages. This logic is especially important in fast-moving domains such as AI, cloud services, and digital platforms.

The multicriteria decision-making literature contributes methodologically. It demonstrates that heterogeneous criteria can be structured, weighted, and compared systematically rather than left to informal executive judgment alone. Approaches such as the Analytic Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) are especially relevant because they make trade-offs explicit while still allowing hard constraints to override compensatory scoring.

Taken together, the literature points toward a sourcing model that must be multi-theoretical, multi-layered, and operationally explicit. That is the role of the model developed in this thesis.

Chapter 3. Theoretical Foundations and Research Gap

3.1 Integrated Theoretical Position

This thesis adopts an explicitly integrative theoretical position. Transaction cost economics explains governance hazards and dependency. The resource-based view explains strategic value and the protection of capability advantage. Dynamic capabilities explain reassessment, adaptation, and reconfiguration. Real options explain the value of reversibility and delayed commitment. Multicriteria decision-making explains how such dimensions can be structured into a usable decision artifact.

The research gap emerges from the fact that existing frameworks either remain too binary, too heuristic, or too fragmented across these theoretical traditions. Many models explain part of the sourcing problem well, but few integrate the full modern decision space in a structured way.

3.2 Identified Gaps

  • the dominant vocabulary remains too narrow: Build and Buy are no longer sufficient to describe the decision space;
  • governance choice and implementation logic are often conflated;
  • flexibility, reversibility, and option value remain under-modeled;
  • externalization capability is rarely treated as a strategic organizational capability in its own right;
  • dynamic reassessment is insufficiently represented in most sourcing frameworks.

The thesis responds to these gaps by proposing a multi-layer technology sourcing and governance decision model that is conceptually clear, empirically operationalizable, and directly relevant to executive technology governance.

Chapter 4. Conceptual Model Development

4.1 Model Architecture

The proposed model is structured in four layers: a pre-decision layer, a strategic qualification layer, an evaluation layer, and a decision-and-reassessment layer.

The pre-decision layer asks whether the problem requires a fresh sourcing decision at all. It applies a reuse-first logic and checks whether the requirement can be met through governed extension of existing assets.

The strategic qualification layer classifies the capability according to strategic profile and lifecycle profile. The evaluation layer scores the capability across the core constructs of the model. The decision layer applies gating rules, calculates alternative utilities, ranks governance alternatives, and specifies future reassessment triggers.

4.2 Governance Alternatives

Build

the organization develops and retains primary governance over the capability, including architecture, prioritization, and often operations.

Buy

the organization acquires a product or licensed solution as the core source of the capability.

Rent

the organization accesses the capability as a service, subscription, utility, or managed operational arrangement.

Partner

the organization co-develops, co-governs, or strategically aligns with an external actor in a more relational form than ordinary procurement.

4.3 Implementation Modes

Reuse

leveraging existing internal or governed external assets before creating or sourcing anew.

Compose

assembling the capability from modular services, APIs, or components.

Generate

using AI-assisted software engineering or agentic development to accelerate creation, integration, or adaptation.

Custom Develop

bespoke development without strong dependence on an existing governed base.

4.4 Strategic and Lifecycle Qualification

The model classifies capabilities by strategic profile: Commodity, Differentiating, Core, or Disruptive. Commodity capabilities typically bias toward Buy or Rent. Core and Disruptive capabilities increase the relative attractiveness of Build or carefully governed Partner arrangements.

Capabilities are also classified by lifecycle profile: Disposable, Transitional, or Enduring. Disposable capabilities may rationally privilege speed and optionality. Enduring capabilities require stronger attention to evolvability, control, and long-term operating logic.

4.5 Formal Propositions

P1. Higher Strategic Value increases the relative attractiveness of Build and strategically aligned Partner arrangements.

P2. Higher Internal Capability increases the feasibility and attractiveness of Build.

P3. Higher Procurement and Externalization Capability increases the attractiveness of Buy, Rent, and Partner.

P4. Higher Technology Maturity and Accessibility increases the attractiveness of Buy and Rent.

P5. Higher Control, Compliance, and Sovereignty requirements increase the attractiveness of Build and constrained Partner structures.

P6. Higher Risk Exposure decreases the attractiveness of the affected option.

P7. Higher Flexibility and Reversibility increase the attractiveness of Rent and many Partner arrangements.

P8. Higher Option Value under uncertainty increases the attractiveness of Rent and, in some cases, Partner.

P9. Greater Reuse Availability reduces the likelihood of Build-from-scratch across all governance forms.

P10. The relationship between Strategic Value and preferred governance alternative is moderated by Internal Capability, Technology Maturity, and Control requirements.

Chapter 5. Construct Definitions and Operationalization

This chapter formalizes the constructs that support the model. The intent is to make the framework research-ready and decision-ready. Not all variables are treated as reflective latent constructs. Several are better modeled as composite indices built from heterogeneous but decision-relevant components.

Strategic Value

Definition: the extent to which a capability contributes to competitive advantage, differentiation, innovation potential, and valuable knowledge accumulation.

Indicative dimensions: Core Strategic Relevance; Differentiation Contribution; Innovation Potential; Knowledge Accumulation Value.

Expected directional effect: higher levels generally strengthen the attractiveness of Build and Partner.

Internal Capability

Definition: the organization’s ability to design, build, deliver, govern, and operate the capability internally over time.

Indicative dimensions: Engineering Expertise; Domain Expertise; Delivery Maturity; Human Bandwidth; Operational Capability.

Expected directional effect: higher levels generally strengthen the attractiveness of Build.

Procurement and Externalization Capability

Definition: the organization’s ability to identify, assess, contract, integrate, govern, monitor, and exit external sourcing arrangements.

Indicative dimensions: RFP Maturity; Contracting Capability; Vendor Governance; SLA Management; Integration Capability; Exit Management Capability.

Expected directional effect: higher levels generally strengthen the attractiveness of Buy, Rent, and Partner.

Technology Maturity and Accessibility

Definition: the degree to which the technology domain is standardized, tooled, supported, and practically accessible.

Indicative dimensions: Standards Maturity; Tooling Maturity; Talent Availability; Ecosystem Maturity; Technology Accessibility.

Expected directional effect: higher levels generally strengthen the attractiveness of Buy and Rent, and secondarily Build feasibility.

Economic Attractiveness

Definition: the overall economic desirability of an alternative across initial cost, medium-term cost, change cost, and time-to-value.

Indicative dimensions: Initial Cost Attractiveness; TCO Attractiveness; Cost of Evolution Attractiveness; Time-to-Value Attractiveness.

Expected directional effect: context dependent; economic attractiveness is scored per alternative and strengthens whichever alternative it favors rather than a single governance form.

Control, Compliance, and Sovereignty

Definition: the degree to which the capability requires internal authority over roadmap, operations, data, compliance execution, and jurisdictional assurance.

Indicative dimensions: Data Sensitivity; Regulatory Burden; Roadmap Control Need; Operational Control Need; Sovereignty Requirement.

Expected directional effect: higher levels generally strengthen the attractiveness of Build and constrained Partner.

Risk Exposure

Definition: the level of vulnerability associated with a governance option, including vendor, geopolitical, cyber, legal, dependency, and immaturity risk.

Indicative dimensions: Vendor Fragility; Geopolitical Risk; Security Risk; Legal/Compliance Risk; Dependency Risk; Immaturity/Beta Risk.

Expected directional effect: higher levels penalize the exposed option rather than favoring any single alternative.

Flexibility and Reversibility

Definition: the degree to which the arrangement allows adaptation, switching, scaling, exit, or reconfiguration without excessive disruption.

Indicative dimensions: Ease of Exit; Portability; Modularity; Switching Cost Attractiveness; Scalability Elasticity; Update Velocity.

Expected directional effect: higher levels generally strengthen the attractiveness of Rent and Partner.

Option Value

Definition: the strategic and economic value of preserving future choice under uncertainty.

Indicative dimensions: Technology Uncertainty; Demand Uncertainty; Innovation Velocity; Exit/Change Strategic Value.

Expected directional effect: higher levels generally strengthen the attractiveness of Rent and selected Partner arrangements.

Collaboration Fit

Definition: the extent to which a co-development or co-governance arrangement is strategically, operationally, and relationally viable.

Indicative dimensions: Co-development Willingness; Strategic Alignment; Shared Governance Feasibility; Acceptability of Value and IP Sharing.

Expected directional effect: higher levels generally strengthen the attractiveness of Partner.

5.1 Measurement and Normalization

Judgment-based items are measured on a 1–7 Likert-type scale and normalized to a 0–100 scale. Objective variables such as cost or expected implementation duration can also be normalized to 0–100 through a desirability or min–max transformation.

The generic normalization formula for 1–7 items is shown below.

xₙ = ((x − 1) / 6) × 100

Dimension scores are calculated as arithmetic means unless a context-specific weighting of sub-dimensions is explicitly justified.
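As a concrete illustration, the normalization and aggregation rules above can be sketched in a few lines of Python; the function names are illustrative rather than part of the model:

```python
def normalize_likert(x: float) -> float:
    """Map a 1-7 Likert-type rating onto the model's 0-100 scale."""
    if not 1 <= x <= 7:
        raise ValueError("Likert ratings must lie in [1, 7]")
    return (x - 1) / 6 * 100


def normalize_minmax(x: float, worst: float, best: float) -> float:
    """Desirability-style min-max normalization to 0-100.

    Works whether lower raw values are better (e.g. cost: worst > best)
    or higher raw values are better (worst < best).
    """
    return (x - worst) / (best - worst) * 100


def dimension_score(item_scores: list[float]) -> float:
    """Arithmetic mean of normalized item scores (the default aggregation)."""
    return sum(item_scores) / len(item_scores)
```

For example, a cost of 300 on a scale where 500 is the worst acceptable value and 100 the best normalizes to 50.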

5.2 Contextual Variables

The model also includes contextual variables that influence interpretation and gating: Strategic Profile, Lifecycle Profile, Time Criticality, Mandatory Sovereignty Constraint, Reuse Availability, Trigger Exposure, and Brand Leverage Potential.

Chapter 6. Scoring Model and Decision Logic

6.1 Pre-Decision Rule

The model begins with a mandatory principle: Reuse before Build, Buy, Rent, or Partner. If a reusable governed asset already satisfies the need above a defined threshold, the decision should be reframed around reuse or extension before greenfield sourcing is considered.

6.2 Gating Rules

  • If sovereignty is mandatory and an external arrangement cannot satisfy jurisdictional requirements, the incompatible Rent or Buy option is disqualified.
  • If internal capability is below a minimum threshold and time criticality is extreme, Build may be disqualified.
  • If vendor fragility, geopolitical exposure, or compliance incompatibility exceed tolerance, the affected external option is disqualified.
  • If lifecycle intent is disposable and strategic value is low, heavy Build should generally be discouraged unless explicit learning value is high.
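These gates are non-compensatory: a failed gate removes an alternative before any utility comparison takes place. A minimal sketch follows; the threshold values (40 for minimum internal capability, 30 for low strategic value) are illustrative assumptions, not calibrated figures from the model:

```python
from dataclasses import dataclass


@dataclass
class DecisionContext:
    sovereignty_mandatory: bool
    external_meets_jurisdiction: dict   # e.g. {"Buy": False, "Rent": True}
    internal_capability: float          # 0-100 construct score
    time_criticality_extreme: bool
    risk_over_tolerance: dict           # e.g. {"Buy": True}
    lifecycle_disposable: bool
    strategic_value: float              # 0-100 construct score
    learning_value_high: bool


def apply_gates(ctx: DecisionContext, min_capability: float = 40.0,
                low_value: float = 30.0) -> dict:
    """Return {alternative: reason} for every gated-out option."""
    gated = {}
    # Gate 1: mandatory sovereignty disqualifies incompatible Buy/Rent
    for alt in ("Buy", "Rent"):
        if ctx.sovereignty_mandatory and not ctx.external_meets_jurisdiction.get(alt, False):
            gated[alt] = "cannot satisfy mandatory sovereignty requirement"
    # Gate 2: weak internal capability under extreme time pressure gates Build
    if ctx.internal_capability < min_capability and ctx.time_criticality_extreme:
        gated["Build"] = "insufficient internal capability under extreme time pressure"
    # Gate 3: intolerable risk exposure gates the affected external option
    for alt, over in ctx.risk_over_tolerance.items():
        if over and alt not in gated:
            gated[alt] = "risk exposure exceeds tolerance"
    # Gate 4: disposable, low-value capabilities discourage heavy Build
    if (ctx.lifecycle_disposable and ctx.strategic_value < low_value
            and not ctx.learning_value_high):
        gated.setdefault("Build", "disposable, low-value capability: heavy Build discouraged")
    return gated
```

Only alternatives that survive all gates proceed to utility scoring.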

6.3 Illustrative Utility Logic

Build Utility

U(Build) = 0.24·SVS + 0.22·ICS + 0.16·CCSS + 0.10·TMAS + 0.08·FRS + 0.10·(100−PECS) + 0.10·(100−EASext)

Buy Utility

U(Buy) = 0.18·(100−SVS) + 0.18·PECS + 0.15·TMAS + 0.20·EAS + 0.10·(100−CCSS) + 0.10·(100−RES) + 0.09·FRS

Rent Utility

U(Rent) = 0.14·PECS + 0.14·TMAS + 0.16·EAS + 0.18·FRS + 0.18·OVS + 0.10·(100−CCSS) + 0.10·(100−RES)

Partner Utility

U(Partner) = 0.20·SVS + 0.10·ICS + 0.18·PECS + 0.12·TMAS + 0.10·EAS + 0.12·FRS + 0.18·CFS

These utility functions are thesis-draft formulas intended for calibration rather than universal fixed weights; the weights in each formula sum to one. Each symbol denotes a 0–100 construct score: SVS (Strategic Value), ICS (Internal Capability), PECS (Procurement and Externalization Capability), TMAS (Technology Maturity and Accessibility), EAS (Economic Attractiveness, with EASext denoting the economic attractiveness of the external alternatives), CCSS (Control, Compliance, and Sovereignty), RES (Risk Exposure), FRS (Flexibility and Reversibility), OVS (Option Value), and CFS (Collaboration Fit). In a full empirical phase, weights should be calibrated through AHP or a comparable structured expert procedure.
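For transparency, the four draft formulas can be transcribed as weight tables over the 0–100 construct scores. In the sketch below, a key prefixed with "~" denotes an inverted term of the form (100 − score), and EASext is read as the economic attractiveness of the external alternatives; both conventions are transcription choices, not part of the model itself:

```python
# Weight tables transcribing the draft utility formulas. A "~" prefix
# means the inverted term (100 - score) enters the weighted sum.
WEIGHTS = {
    "Build":   {"SVS": 0.24, "ICS": 0.22, "CCSS": 0.16, "TMAS": 0.10,
                "FRS": 0.08, "~PECS": 0.10, "~EASext": 0.10},
    "Buy":     {"~SVS": 0.18, "PECS": 0.18, "TMAS": 0.15, "EAS": 0.20,
                "~CCSS": 0.10, "~RES": 0.10, "FRS": 0.09},
    "Rent":    {"PECS": 0.14, "TMAS": 0.14, "EAS": 0.16, "FRS": 0.18,
                "OVS": 0.18, "~CCSS": 0.10, "~RES": 0.10},
    "Partner": {"SVS": 0.20, "ICS": 0.10, "PECS": 0.18, "TMAS": 0.12,
                "EAS": 0.10, "FRS": 0.12, "CFS": 0.18},
}


def utility(alternative: str, scores: dict) -> float:
    """Weighted-sum utility of one governance alternative (0-100)."""
    total = 0.0
    for term, weight in WEIGHTS[alternative].items():
        value = 100 - scores[term[1:]] if term.startswith("~") else scores[term]
        total += weight * value
    return total
```

Because each weight set sums to one, a profile in which every construct scores 50 yields a utility of 50 for all four alternatives, which is a useful sanity check during calibration.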

6.4 Decision Outputs

  • primary recommendation;
  • secondary recommendation;
  • disqualified alternatives and reasons;
  • dominant decision drivers;
  • confidence score based on the gap between the top two options;
  • robustness score based on sensitivity analysis;
  • review interval and reassessment triggers.
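Two of these outputs lend themselves to a direct computational sketch: confidence from the gap between the top two utilities, and robustness from repeated re-evaluation under perturbed inputs. The gap-scaling factor and the noise amplitude below are illustrative assumptions, not values prescribed by the model:

```python
import random


def recommend(utilities: dict, trials: int = 500, noise: float = 5.0):
    """Rank alternatives and attach confidence and robustness scores.

    Confidence scales the top-two utility gap onto 0-100 (the x10 factor
    is an illustrative choice); robustness is the share of noisy
    re-evaluations in which the primary recommendation survives.
    """
    ranked = sorted(utilities, key=utilities.get, reverse=True)
    primary, secondary = ranked[0], ranked[1]
    confidence = min(100.0, (utilities[primary] - utilities[secondary]) * 10)
    rng = random.Random(42)  # fixed seed so the illustration is reproducible
    stable = 0
    for _ in range(trials):
        noisy = {a: u + rng.uniform(-noise, noise) for a, u in utilities.items()}
        if max(noisy, key=noisy.get) == primary:
            stable += 1
    robustness = 100.0 * stable / trials
    return primary, secondary, confidence, robustness
```

A narrow gap between the top two alternatives yields low confidence and typically low robustness, which signals that the recommendation is sensitive to input uncertainty and deserves closer review.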

Chapter 7. Research Methodology

This thesis adopts a pragmatic, abductive, and design-oriented methodology. It combines conceptual theory building with the design of a decision artifact and a future empirical validation pathway. A purely positivist approach would be too narrow because several constructs depend on structured executive judgment. A purely interpretivist approach would be insufficient because the thesis aims to produce a formalizable, reproducible decision model.

The methodology is therefore hybrid. It starts from theory, learns from practical decision realities, and culminates in an artifact capable of calibration and testing. This is appropriate for a problem that is simultaneously explanatory, normative, and decision-support oriented.

7.1 Research Design

  • Phase 1: conceptual development of the model from literature synthesis and structured practitioner reasoning;
  • Phase 2: construct refinement and content validation through expert review and Delphi-style convergence;
  • Phase 3: weight calibration using AHP and structured pairwise comparison;
  • Phase 4: empirical application and validation using retrospective and comparative sourcing cases.

7.2 Unit of Analysis

The primary unit of analysis is the technology capability sourcing decision. This may be a customer identity platform, fraud engine, claims workflow, reporting engine, pricing engine, AI assistant, integration layer, or other bounded capability. This unit is appropriate because sourcing decisions are often made at capability level and may differ materially within the same organization.

7.3 Measurement Design

The model combines objective indicators, structured ratings, composite indices, and contextual qualifiers. Content validity is strengthened through literature grounding and expert review. Because multiple evaluators may apply the model to the same case, inter-rater reliability and evaluator training are important.

7.4 Validation Logic

A full empirical validation should include content validity, face validity, inter-rater consistency, sensitivity analysis, and multiple-case comparison. The model’s recommendation should be compared not only with historical decisions, but also with observed post-decision outcomes, because a mismatch may indicate that the original organizational choice was itself weak.

Chapter 8. Empirical Validation Design

The empirical validation design is phased and cumulative. It begins with expert validation of constructs and decision logic, proceeds to weight calibration, and culminates in case-based application and cross-case comparison.

The purpose of validation is not only to confirm plausibility, but to assess whether the model can be understood, applied consistently, calibrated transparently, and used meaningfully in real sourcing contexts.

8.1 Expert Validation

A panel of CTOs, CIOs, enterprise architects, procurement leaders, risk officers, and senior delivery or engineering leaders should assess construct completeness, terminology clarity, missing dimensions, and gating logic.

8.2 AHP Weight Calibration

Pairwise comparisons across the major criteria should be used to derive context-aware weights and consistency ratios. Subgroup comparison may reveal meaningful differences between technology, procurement, and risk leadership populations.
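One standard way to turn a pairwise-comparison matrix into weights is the row geometric mean approximation of Saaty's principal-eigenvector method, together with the consistency ratio CR = CI / RI. The sketch below uses Saaty's published random-index values and is offered as an illustration, not as the calibrated procedure of this thesis:

```python
import math

# Saaty's random consistency indices for matrices of size n
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}


def ahp_weights(matrix):
    """Priority weights via row geometric means, plus the consistency ratio."""
    n = len(matrix)
    geo_means = [math.prod(row) ** (1 / n) for row in matrix]
    total = sum(geo_means)
    weights = [g / total for g in geo_means]
    # Estimate the principal eigenvalue as the mean of (A w)_i / w_i
    lam = sum(
        sum(matrix[i][j] * weights[j] for j in range(n)) / weights[i]
        for i in range(n)
    ) / n
    ci = (lam - n) / (n - 1) if n > 1 else 0.0
    ri = RANDOM_INDEX.get(n, 1.45)
    cr = ci / ri if ri > 0 else 0.0
    return weights, cr
```

A consistency ratio above roughly 0.10 is conventionally treated as a signal that the expert's pairwise judgments should be revisited before the weights are adopted.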

8.3 Case-Based Validation

The model should be applied to a theoretically sampled set of historical sourcing cases covering successful and unsuccessful Build, Buy, Rent, and Partner decisions. Each case should document the capability, context, alternatives considered, actual choice, and observed outcomes.

8.4 Evaluation Metrics

  • recommendation plausibility;
  • alignment with historical decision;
  • consistency with observed outcomes;
  • inter-rater reliability;
  • robustness under changed weights or uncertain inputs;
  • managerial usefulness in governance discussions.
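For the inter-rater reliability metric, continuous 0–100 construct ratings can be assessed with an intraclass correlation. A minimal one-way ICC(1,1) sketch over an n-cases × k-raters matrix follows, offered as one reasonable choice of reliability statistic rather than one prescribed by the validation design:

```python
def icc_1_1(ratings):
    """One-way random-effects ICC(1,1) for an n-targets x k-raters matrix."""
    n, k = len(ratings), len(ratings[0])
    grand_mean = sum(sum(row) for row in ratings) / (n * k)
    target_means = [sum(row) / k for row in ratings]
    # Between-target and within-target mean squares (one-way ANOVA)
    msb = k * sum((m - grand_mean) ** 2 for m in target_means) / (n - 1)
    msw = sum(
        (x - m) ** 2 for row, m in zip(ratings, target_means) for x in row
    ) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement across raters yields an ICC of 1.0; disagreement pushes the value toward (and below) zero, flagging the need for evaluator training or construct clarification.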

Chapter 9. Discussion

This thesis suggests that technology sourcing should be treated as a structured decision over future capability ownership, control, adaptability, and learning. It is not simply an argument about cost minimization or engineering preference.

The model also highlights that sourcing decisions are path-dependent. Every Build decision develops internal muscles. Every Buy or Rent decision develops—or fails to develop—externalization muscles such as procurement maturity, vendor governance, and integration discipline. Every Partner decision shapes the firm’s relationship to ecosystem dependence, collaborative learning, and shared control.

From this perspective, sourcing is a portfolio discipline. It requires explicit choices about what the firm wants to own intellectually, what it wants to own operationally, what it wants to accelerate externally, and what it wants to keep reversible.

Chapter 10. Contributions

10.1 Theoretical Contributions

The thesis extends classical boundary-of-the-firm reasoning into a contemporary technology governance context by formalizing Build, Buy, Rent, and Partner as distinct governance alternatives. It also integrates transaction cost economics, the resource-based view, dynamic capabilities, real options, and multicriteria decision-making into one coherent decision architecture.

A further theoretical contribution is the reframing of sourcing as capability capital allocation. Each sourcing decision is treated as an investment in future knowledge, control, dependence, optionality, and organizational muscle formation rather than merely as a choice of delivery mechanism.

10.2 Conceptual Contributions

One of the strongest conceptual contributions is the explicit separation between governance alternatives and implementation modes. This distinction resolves a persistent confusion in both practice and prior frameworks.

The thesis also contributes a layered decision logic that combines reuse-first reasoning, strategic qualification, construct-based evaluation, non-compensatory gating, utility scoring, and dynamic reassessment.

10.3 Methodological and Managerial Contributions

The thesis translates a strategic managerial debate into a research-ready model with formal constructs, scoring logic, utility functions, decision outputs, and validation pathways.

For practitioners, it offers a more mature executive language for technology investment. It recognizes that buying and renting also require capability, and it gives organizations a structure for governing sourcing decisions with greater transparency and repeatability.

Chapter 11. Limitations

The model is intentionally broad, which increases relevance but also creates abstraction. A framework capable of spanning software platforms, cloud services, ecosystems, and AI-assisted delivery cannot capture every domain-specific nuance without contextual tailoring.

The thesis also integrates multiple theoretical traditions that do not share identical assumptions. This is a strength from a design perspective, but it means the work contributes more to theory integration and decision architecture than to isolated extension of any single theory.

Several constructs remain partly judgment-based. Strategic Value, Collaboration Fit, and Option Value can be structured rigorously, but they cannot be reduced to purely objective measures. Weighting also remains context-sensitive, which means calibration matters materially.

Finally, the manuscript provides a strong empirical validation design but not yet the full executed validation at scale. The thesis is therefore conceptually strong and methodologically ready, but its predictive and comparative performance still requires systematic empirical testing.

Chapter 12. Conclusion

This thesis began from the observation that the classical build-versus-buy framing no longer captures the full reality of contemporary technology sourcing. Modern organizations operate in a richer and more volatile decision space shaped by platform ecosystems, service models, modular architecture, regulation, and AI-assisted software creation.

The thesis responded by proposing a multi-layer technology sourcing and governance decision model that distinguishes governance alternatives from implementation modes, embeds sourcing decisions in capability and control logic, and treats flexibility and optionality as central rather than peripheral variables.

The work has argued that sourcing decisions should be understood as decisions over strategic possibility. A Build decision may create long-term control and learning. A Buy decision may secure speed and embedded expertise. A Rent decision may preserve elasticity and option value. A Partner decision may accelerate innovation while sharing risk and governance. None of these options is inherently superior in the abstract; each becomes more or less attractive depending on strategic value, capability maturity, uncertainty, control need, and risk.

The enduring contribution of the thesis is therefore not only the model itself. It is the invitation to treat technology sourcing as a serious discipline of strategic governance under uncertainty.

References

Alaghehband, F. K., Rivard, S., Wu, S., & Goyette, S. (2011). An assessment of the use of transaction cost theory in information technology outsourcing. The Journal of Strategic Information Systems, 20(2), 125–138.

Aubert, B. A., Rivard, S., & Patry, M. (1996). A transaction cost approach to outsourcing behavior. Information & Management, 30(2), 51–64.

Aubert, B. A., Rivard, S., & Patry, M. (2004). A transaction cost model of IT outsourcing. Information & Management, 41(7), 921–932.

Balliauw, M., Kort, P. M., & Zhang, A. (2021). From theoretical real options to practical decision support: An analysis of the real options approach and its pitfalls for decision-makers. Journal of Economic Dynamics and Control, 129, 104172.

Barney, J. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1), 99–120.

Bharadwaj, A. S. (2000). A resource-based perspective on information technology capability and firm performance: An empirical investigation. MIS Quarterly, 24(1), 169–196.

Brewer, B., Ashenbaum, B., & Ogden, J. (2014). Outsourcing the procurement function: Do actions and results align with theory? Journal of Purchasing and Supply Management, 20(2), 94–104.

Coase, R. H. (1937). The nature of the firm. Economica, 4(16), 386–405.

De Giovanni, P. (2018). Capacity investment under uncertainty: The effect of volume flexibility and the value of real options. International Journal of Production Economics, 198, 105–119.

Dibbern, J., Goles, T., Hirschheim, R., & Jayatilaka, B. (2004). Information systems outsourcing: A survey and analysis of the literature. The DATA BASE for Advances in Information Systems, 35(4), 6–102.

Eisenhardt, K. M., & Martin, J. A. (2000). Dynamic capabilities: What are they? Strategic Management Journal, 21(10–11), 1105–1121.

Espino-Rodríguez, T. F., & Padrón-Robaina, V. (2006). A review of outsourcing from the resource-based view of the firm. International Journal of Management Reviews, 8(1), 49–70.

Folta, T. B. (1998). Governance and uncertainty: The trade-off between administrative control and commitment. Strategic Management Journal, 19(11), 1007–1028.

Grover, V., Cheon, M. J., & Teng, J. T. C. (1996). The effect of service quality and partnership on the outsourcing of information systems functions. Journal of Management Information Systems, 12(4), 89–116.

Henderson, J. C., & Venkatraman, N. (1993). Strategic alignment: Leveraging information technology for transforming organizations. IBM Systems Journal, 32(1), 4–16.

Kahraman, C., Engin, O., Kabak, Ö., & Kaya, İ. (2009). Information systems outsourcing decisions using a group decision-making approach. Engineering Applications of Artificial Intelligence, 22(6), 832–841.

Lacity, M. C., Khan, S. A., & Willcocks, L. P. (2009). A review of the IT outsourcing literature: Insights for practice. The Journal of Strategic Information Systems, 18(3), 130–146.

Lacity, M. C., Khan, S. A., Yan, A., & Willcocks, L. P. (2010). A review of the IT outsourcing empirical literature and future research directions. Journal of Information Technology, 25(4), 395–433.

Lacity, M. C., Khan, S. A., Yan, A., & Willcocks, L. P. (2011). Towards an endogenous theory of information technology outsourcing. The Journal of Strategic Information Systems, 20(2), 139–157.

McFarlan, F. W. (1981). Portfolio approach to information systems. Harvard Business Review, 59(5), 142–150.

McIvor, R. (2009). How the transaction cost and resource-based theories of the firm inform outsourcing evaluation. Journal of Operations Management, 27(1), 45–63.

Mikalef, P., & Pateli, A. (2017). Information technology-enabled dynamic capabilities and their indirect effect on competitive performance. Journal of Business Research, 70, 1–16.

Porter, M. E. (1985). Competitive advantage: Creating and sustaining superior performance. Free Press.

Prahalad, C. K., & Hamel, G. (1990). The core competence of the corporation. Harvard Business Review, 68(3), 79–91.

Saaty, T. L. (1980). The analytic hierarchy process. McGraw-Hill.

Sambamurthy, V., Bharadwaj, A., & Grover, V. (2003). Shaping agility through digital options: Reconceptualizing the role of information technology in contemporary firms. MIS Quarterly, 27(2), 237–263.

Teece, D. J. (2007). Explicating dynamic capabilities: The nature and microfoundations of sustainable enterprise performance. Strategic Management Journal, 28(13), 1319–1350.

Teece, D. J., Pisano, G., & Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18(7), 509–533.

Tiwana, A. (2014). Platform ecosystems: Aligning architecture, governance, and strategy. Morgan Kaufmann.

Watjatrakul, B. (2005). Determinants of IS sourcing decisions: A comparative study of transaction cost theory versus the resource-based view. The Journal of Strategic Information Systems, 14(4), 389–415.

Wilden, R., Gudergan, S. P., Nielsen, B. B., & Lings, I. (2013). Dynamic capabilities and performance: Strategy, structure and environment. Long Range Planning, 46(1–2), 72–96.

Williamson, O. E. (1975). Markets and hierarchies: Analysis and antitrust implications. Free Press.

Williamson, O. E. (1985). The economic institutions of capitalism. Free Press.

Williamson, O. E. (1991). Comparative economic organization: The analysis of discrete structural alternatives. Administrative Science Quarterly, 36(2), 269–296.

Appendices

Appendix A. Example Measurement Instrument

Strategic Value

Rate each item from 1 (very low) to 7 (very high):

  • This capability is central to our long-term strategic positioning.
  • This capability contributes materially to market differentiation.
  • This capability may enable future innovation or disruption.
  • Internal control over this capability would generate valuable learning.

Internal Capability

Rate each item from 1 (very low) to 7 (very high):

  • We possess the engineering expertise required for this capability.
  • We have the business/domain expertise required.
  • Our teams are mature enough to deliver this capability reliably.
  • We have sufficient human bandwidth to absorb this work.
  • We can operate and evolve this capability after delivery.

Procurement and Externalization Capability

Rate each item from 1 (very low) to 7 (very high):

  • We can compare external solutions rigorously.
  • We can negotiate robust legal and commercial terms.
  • We can govern external providers effectively over time.
  • We can integrate external capabilities effectively.
  • We can prepare and execute an exit if needed.

Appendix B. Decision Output Template

  • Capability name and scope
  • Strategic profile and lifecycle profile
  • Reuse opportunities identified
  • Disqualified alternatives and reasons
  • Utility scores for remaining alternatives
  • Primary and secondary recommendation
  • Key decision drivers and key risks
  • Confidence and robustness indicators
  • Review interval and reassessment triggers
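To make the layered decision logic concrete, the sketch below (in Python, with entirely hypothetical construct names, thresholds, weights, and alternative profiles) illustrates how item ratings from the Appendix A instrument could be aggregated into construct scores, how a non-compensatory gate disqualifies alternatives before any utility is computed, and how the surviving governance alternatives are ranked by weighted utility. It is an illustration of the decision architecture, not a calibrated implementation.

```python
from statistics import mean

def construct_score(item_ratings):
    """Average the 1-7 item ratings from the instrument and rescale to [0, 1]."""
    return (mean(item_ratings) - 1) / 6

# Hypothetical construct profiles (already on [0, 1]) for each governance alternative.
alternatives = {
    "Build":   {"strategic_value": 0.9, "economic": 0.4, "control": 0.9, "risk": 0.5},
    "Buy":     {"strategic_value": 0.6, "economic": 0.7, "control": 0.6, "risk": 0.6},
    "Rent":    {"strategic_value": 0.5, "economic": 0.8, "control": 0.3, "risk": 0.7},
    "Partner": {"strategic_value": 0.7, "economic": 0.6, "control": 0.5, "risk": 0.6},
}

# Non-compensatory gate: an alternative is disqualified outright if any gated
# construct falls below its threshold, no matter how strong the other scores are.
gates = {"control": 0.4}

# Context-specific weights (equal here purely for illustration; in practice
# they would be calibrated, e.g. via an AHP exercise).
weights = {"strategic_value": 0.25, "economic": 0.25, "control": 0.25, "risk": 0.25}

def evaluate(alts, gates, weights):
    """Apply the gate, then rank surviving alternatives by weighted utility."""
    ranked = []
    for name, scores in alts.items():
        if any(scores[c] < threshold for c, threshold in gates.items()):
            continue  # gated out; in practice recorded with its reason (Appendix B)
        utility = sum(weights[c] * scores[c] for c in weights)
        ranked.append((name, round(utility, 3)))
    return sorted(ranked, key=lambda pair: -pair[1])

ranking = evaluate(alternatives, gates, weights)
```

In this toy profile, Rent is gated out on control before scoring, and the remaining alternatives are ordered by utility. In practice the weights and thresholds would be calibrated per context, and disqualified alternatives would be documented with their reasons, as the Appendix B template requires.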

Appendix C. AHP and Case Validation Templates

TODO: The full empirical version of this thesis should include structured AHP pairwise comparison sheets, Delphi round prompts, case reconstruction templates, evaluator guidance, and sensitivity analysis worksheets.
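As a companion to the planned AHP pairwise comparison sheets, the following minimal sketch (Python; the three criteria and all comparison values are purely illustrative) shows one common way to derive priority weights from a single pairwise comparison matrix, using the geometric-mean approximation together with Saaty's consistency ratio as a sanity check on the judgments.

```python
import math

# Pairwise comparison matrix on Saaty's 1-9 scale for three illustrative
# criteria (e.g. strategic value vs. economics vs. control). A[i][j] encodes
# how much more important criterion i is than criterion j; A[j][i] = 1/A[i][j].
A = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]

def priority_weights(matrix):
    """Geometric mean of each row, normalized to sum to 1."""
    gms = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

def consistency_ratio(matrix, weights):
    """Saaty's CR: consistency index divided by the random index for this n."""
    n = len(matrix)
    # lambda_max approximated as the mean of (A w)_i / w_i
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # random index values for small n
    return ci / ri

w = priority_weights(A)
cr = consistency_ratio(A, w)
```

A consistency ratio below roughly 0.10 is conventionally treated as acceptable; higher values suggest the pairwise judgments should be revisited before the weights are used in scoring.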


The Regulatory Advantage: Why Anthropic Is Gaining Ground on OpenAI in the European Enterprise Race

This article offers a complementary perspective to Episode 240 of the Moonshots podcast, a series I highly recommend for its depth on the impacts of technology and AI.

It builds on their discussion of the strategic battle between Anthropic and OpenAI to capture the enterprise space, bringing a distinctly European lens to the conversation.

The discourse surrounding Artificial Intelligence often centers on raw compute power, parameter counts, and the race toward Artificial General Intelligence (AGI). However, as the industry matures, a significant strategic divide has emerged, one that was highlighted in the mid-March “Pivot” episode and is becoming increasingly visible across the Atlantic. While Sam Altman and OpenAI have doubled down on a “scale compute, distribution, and capital” bet aimed primarily at the end-user consumer, Anthropic has quietly but deliberately focused on the institutional and enterprise sectors.

In the high-stakes theater of the European market, this strategic divergence is proving to be the deciding factor. From a European perspective, the race is no longer just about who has the fastest model, but who can navigate the complex interplay of regulation, trust, and business stability. In this environment, Anthropic is structurally better positioned for Europe’s regulatory-heavy enterprise landscape, and that advantage is starting to materialize.

The European Paradox: Regulation as a Business Variable

To understand why the OpenAI-Anthropic rivalry is playing out differently in Europe, one must first understand the European business psyche. Europe is famously known for its stringent regulatory environment. This is not accidental; it stems from a deep social tradition in which capitalism is balanced by the ambition to include everyone in the economy. It is a form of capitalism, certainly, but one that lacks the “move fast and break things” aggression often found in the United States.

In this context, the regulatory framework is not merely a hurdle to be cleared; it is a fundamental part of the business equation. In Europe, having a strong regulatory posture is a core part of a company’s commercial value proposition. It serves as an “entry check” for doing business. This is particularly true in software-heavy industries such as finance, telecommunications, retail, and cybersecurity.

While the regulatory burden may be slightly lighter in retail or telco, it is the absolute bedrock of the finance sector. In finance, being “regulatory-friendly” is a massive business advantage. It is the currency of trust. Furthermore, compliance serves as a “passport.” In a fragmented continent, being compliant in a rigorous jurisdiction like Luxembourg or Switzerland provides a level of credibility that facilitates expansion into other European markets. Anthropic’s early bet on “Institutional AI” aligned perfectly with this reality, while OpenAI’s consumer-centric approach initially ignored these nuances.

The Tale of Two Strategies: Consumer Scale vs. Institutional Safety

Sam Altman’s gamble on scale compute for the end-consumer was bold, but from an enterprise perspective, it may have been a miscalculation. OpenAI’s initial success with ChatGPT made it the de facto go-to solution, primarily because Anthropic’s models were initially not connected to the internet—a significant hurdle for early adoption.

However, as the dust settled, the inherent strengths of Anthropic’s suite (Sonnet, Haiku, and Opus) began to shine in the corporate world. Anthropic bet on models that were specifically engineered to be business-friendly, particularly in their ability to manipulate and manage complex business documents.

In the banking sector, for example, the narrative and articulation of content often matter as much as the data itself. Experience has shown that Anthropic models frequently outperform OpenAI in handling legal texts and marketing narratives. The way Anthropic models structure content is often more accurate and better articulated for professional standards. While OpenAI’s GPT models were distributed through Microsoft Azure, providing them a massive head start in distribution, the underlying “business logic” of the Anthropic models began to pull ahead in specialized use cases like legal review and presentation development.

The Stability Factor and the “OpenAI Drama”

Europeans value stability. This preference for the predictable over the volatile became a significant liability for OpenAI during the corporate upheaval involving Sam Altman’s temporary ousting. From a risk perspective, this drama was a red flag for European stakeholders. As a young company, OpenAI’s internal stability could no longer be guaranteed, and for a Chief Risk Officer (CRO) or a Compliance Officer in a European bank, that instability is a deal-breaker.

Anthropic, by contrast, has demonstrated a steady, “safety-first” posture since its inception. This stability has become a cornerstone of their reputation. Even Microsoft, OpenAI’s primary benefactor, recognized this. Satya Nadella’s move to diversify Microsoft’s portfolio by integrating more capable Anthropic models was a masterful stroke of risk hedging.

Microsoft understood that the regulatory space was a strong advantage. They have been winning in Europe by being the bridge between cutting-edge AI and compliance frameworks like SOC 2 Type 2, GDPR, the EU AI Act, DORA, and Schrems II. By providing compliant reference frameworks and working closely with the “Big Four” auditing firms, Microsoft caters to the auditors and compliance officers who view AI not just through an IT lens, but through a risk and architecture lens. In this ecosystem, Anthropic’s models, which are perceived as more stable and “safe for work” (SFW), fit the Microsoft enterprise narrative better than the increasingly unpredictable OpenAI.

The Engineering Scarcity and the Coding Pivot

Another critical front in this battle is the developer experience. In Europe, the landscape for IT talent is distinct. We do not have the same culture of “rockstar engineers” as the US (mid-level US IT engineers earn roughly 50% more than their EU counterparts, and the top 10% of US engineers earn two to four times as much), yet the cost of IT expertise is exceptionally high due to social security structures and a general scarcity of talent. While outsourcing is common, it has often proven less effective than native, local development due to the complexities of the European environment.

Europe is not a monolith; it is a plethora of languages, cultures, and business domains. Managing an offshore team to understand the cultural nuance of a French retail chain or a German engineering firm is fraught with difficulty. This has created a massive demand for “AI-augmented engineering” and AI coding agents.

Anthropic recognized this early and focused heavily on their coding models. By creating a regulatory-friendly IT engineering platform, they have built deep trust within the developer community. This trust has permeated upwards from the developer layer to the IT management layer, and finally to the business management layer. Today, Anthropic holds the reputation for providing the best models for high-stakes coding and business articulation, while remaining firmly within the “regulatory-friendly” camp. Furthermore, Claude Code is winning the developer mindshare, pulling ahead of Codex and leaving (Microsoft) GitHub Copilot trailing behind.

Geopolitics and the Military Divergence

The gap between these two giants is further widening due to geopolitical considerations. Recently, OpenAI has become more deeply embedded in the US military and defense business. While this is a standard trajectory for many US tech giants, it creates a friction point in the European market.

For certain European industries (finance in particular) deep ties to the US military-industrial complex can be perceived as an incompatibility. The combination of OpenAI’s shift toward defense, their history of internal instability, and the general risk associated with US-centric technology in an era of digital sovereignty has made European firms cautious. Anthropic’s positioning as a “safety” company, focused on institutional reliability rather than consumer or military dominance, offers a much more palatable alternative for the European C-suite.

The Local Challenger: The Rise of Mistral

It would be a mistake to view this as a two-horse race. Europe has its own champion in Mistral. As the continent looks to balance innovation with sovereignty, we are likely to see a “best of breed” approach in the European stack. This will likely manifest as a combination of Mistral and Anthropic taking the lead, with OpenAI relegated to a secondary position depending on how future regulations impact decision-making.

The propagation of AI innovation in Europe is naturally slower than in the US because it is driven by a “cloud computing first” mindset. We are currently in a ramping-up phase where the priority is a combination of secured cloud platforms and compliant AI models.

Conclusion: The New Business Driver

In summary, the “wrong bet” on consumer scale and the subsequent internal volatility have left an opening in the European market; one that Anthropic has moved to fill with surgical precision. By treating regulation not as a constraint but as a business driver, Anthropic has aligned itself with the fundamental requirements of the European economy.

While OpenAI and Google (with its own massive hyperscaler ecosystem) will continue to hold significant market share, the momentum in Europe has shifted. Anthropic’s focus on business accuracy, legal articulation, and a stable, “regulatory-friendly” posture has made them the preferred partner for the next generation of European enterprise AI. In the halls of European finance and the offices of the Big Four, the verdict is becoming clear: safety, stability, and compliance are the true engines of AI adoption. In this race, the turtle of institutional safety is currently outrunning the hare of consumer scale.

Yannick HUCHARD


The Human Moat: Riding the Delta (Δ) in the Great AI Rearchitecture

What you are about to read might be the most unsettling—and necessary—thing you read about your career this year. It cuts against the grain of simplified narratives and offers a dose of reality about the monumental economic transformation we are entering. This 6th episode (of the “Navigating the Future with AI” series) is not just another article about AI. It is your personal GPS for navigating the Great Rearchitecture. Within it is a detailed plan designed to demystify what is truly happening, helping you to navigate the coming challenges while seizing the profound opportunities they create. It is your blueprint for moving from a position of uncertainty to one of relevance and power in the post-AI economy.

Business & Tech leaders, economists, and thinkers are all forecasting a worldwide shift, and the ground is already trembling. The common fear is one of simple replacement—that millions of workers will be made redundant by a new wave of artificial intelligence. While this fear is understandable, it misinterprets the present danger. The story is far more complex and has already begun.

The Great Reallocation of Capital: Understanding the Self-Fulfilling Prophecy

The Great Rearchitecture that is reshaping our professional world isn’t happening in a vacuum. It is being driven by a powerful, underlying financial current: The Great Reallocation of Capital.

At its core, this reallocation stems from a fundamental choice I outlined in the first article of this “Navigating the Future” series (Digital Augmentation). Does a leader use AI as a manpower divider—achieving the same output with fewer people—or as a productivity multiplier, using the same workforce to accomplish vastly more? The layoffs we are witnessing suggest many are choosing the former.

AI Divider or Multiplier

It’s a strategic crossroads where we see leaders diverging. However, a few forward-thinking leaders are charting the alternative path. A prime example is Shopify CEO Tobias Lütke, who, in a widely circulated memo, instructed his company to restrain hiring and instead embrace a new default: every employee must first exhaust AI as a solution before new headcount is considered. This is the productivity multiplier in action: transforming their own jobs to increase their capabilities and, by extension, the company’s.

And yet, this choice often ignores a fundamental truth I have observed in every organization I have worked with: there are no empty backlogs. There is always 10x more work to be done than the current team can handle, with ambitions that would require 100x the effort. A substantial reservoir of potential value lies untapped.

Just consider the functions often treated as cost centers—quality assurance, cybersecurity, compliance, and even employee wellness. With AI as a multiplier, these can be transformed into powerful market differentiators. A company’s decision here reveals its true vision: a defensive focus on short-term cost-cutting versus an ambitious pursuit of long-term value creation.

You have seen the headlines. Microsoft, IBM, Amazon, Salesforce, and Meta have all made significant cuts to their workforce. But the reduction is not, as many assume, primarily because AI is already capable of replacing workers such as engineers, designers, marketers, HR and compliance specialists, and, proportionally, managers. The reality is that these layoffs are an anticipation of AI’s future power.

We are witnessing a strategic, system-wide efficiency exercise. Corporations are trimming their largest operational expenditure—salaries and their associated costs—to amass immense war chests of capital. This capital is being funneled directly into the single biggest prize in modern history: the development and deployment of Artificial General Intelligence (AGI) and, eventually, Superintelligence. It is a frantic race, and whoever gets there first will win the game.

The contenders are clear: Google, leveraging decades of research from DeepMind and its powerful Gemini models; Meta, pushing the open-source frontier with Llama 4 and its JEPA world models; Elon Musk’s xAI and its unfiltered Grok; Anthropic’s safety-conscious Claude; and the colossal cloud platforms of Amazon and Microsoft. Underpinning this entire revolution is NVIDIA, the undisputed kingmaker providing the very infrastructure of inference with its GPUs. This is not, however, merely a Silicon Valley affair; it is a key battlefield in the techno-geopolitical power balance. China is rapidly closing the gap with formidable open-source contenders like DeepSeek‘s V3 reasoning models, Alibaba’s versatile Qwen family, and the surprise emergence of Moonshot AI’s Kimi K2, an exceptionally powerful agentic model. Meanwhile, Europe is striving for technological sovereignty with champions like France’s Mistral AI, which has gained significant traction by offering a powerful, open-weight alternative, followed by Aleph Alpha in Germany. This fierce global cycle of investment and innovation creates an unavoidable truth: intelligence itself is becoming a manufactured resource, destined to become hyper-reliable for executing complex tasks. And the disruption is not limited to knowledge work; Amazon’s deep investment in robotics signals a parallel transformation for physical labor.

This high-speed revolution, however, is largely a Big Tech phenomenon. The other 99% of the economy is not there yet. For most companies, the reality is far more challenging. This isn’t theoretical. In my own journey leading AI adoption in the banking sector—an industry I know very well—I witnessed the immense difficulty firsthand. It took a full year of relentless effort: building the foundations of our AI-driven transformation from the Technology Office—aligning our most powerful change engines of Enterprise Architecture, Engineering, and Innovation—while simultaneously using the momentum from public AI discussions to secure buy-in, engaging with the local tech ecosystem, and rallying a great team of curious, knowledgeable, and innovative people to push in the same direction and prove the value. And what I consistently see—whether in discussions with global consulting firms, specialized service providers, or businesses large and small—is a recurring, critical gap. This isn’t just my observation; it’s a reality confirmed by a major Microsoft and LinkedIn study, which found that while a commanding 79% of leaders feel AI adoption is critical to remaining competitive, a staggering 60% state that their company lacks a clear vision and plan to implement it. This disconnect highlights that most organizations simply lack the strong technological leadership and prepared workforce to manage such a transformation.

This gap is creating a powerful self-fulfilling prophecy. The belief in AI’s future profitability is compelling companies to lay off staff now to fund AI investment, which in turn accelerates the creation of the very technology that will make those roles redundant later. The engine of this prophecy is the eternal drive for shareholder value. And make no mistake—as an investor in the stock market, that engine is partially driven by you.

Be Aware of and Leverage the Delta (Δ)

Do you feel it? That persistent sense, ever since you were a teenager, that whatever the direction, life and society were always demanding more?

  • More study to get a better job.
  • More work to get a better salary.
  • More exercise on a regular basis just to stay in shape.
  • More training during your job to remain compliant and try to stay ahead.

Not only that, have you noticed that whatever you do, there is a rampant system that constantly pushes the rate of change itself? Like inflation that drives prices up, requiring higher salaries or forcing you to lower your living standards. Or the price of housing that keeps climbing, so you have a hard time buying your house—always hoping a better opportunity will come later, which never does, because when prices are low, mortgage rates are high. Your job constantly demands new skills because some technology or method is no longer efficient enough, or not trendy anymore—like the shift from Waterfall to Agile that suddenly rendered a PRINCE2 certification seemingly obsolete. And why is everything about AI now? You feel you barely understood Crypto and Blockchain.

This, ladies and gentlemen, is what I call the Delta (Δ), inspired by the mathematical symbol representing the function of change.

The Delta is always on. It can never be turned off. It is not a bug; it is a feature of our modern world, hardwired into the very dynamics of market economies and the core of human psychology. We all want a better life, a higher standard of living, and we operate in a competitive environment of businesses whose primary reason for existing is to grow. Therefore, you have a choice. You can resist the Delta and be broken by it, or you can accept it. Embrace it. Change your perspective on it, and learn to ride the wave. You must ride it until we, as a global society, reach a point—through a provoked agreement or a catastrophe—where we decide that the Delta can only push the human psyche and nations as a whole so far.

Your Blueprint for Lasting Value in the New Economy

Many leaders look at this disruption and immediately jump to solutions like Universal Basic Income (UBI). Let me be unequivocally clear on where I stand: while I hold that unconditional support for those left without work is a fundamental pillar of a humane society, my critique of UBI is that it acts as a patch on a structural fracture. It addresses the symptom—a lack of income—while ignoring the deeper, coming crisis of agency and purpose. Furthermore, it completely sidesteps the great economic equation of our time: the widening disconnect between the effort a task requires, the value that work creates, and the way it is ultimately remunerated. It fosters dependency when the strategic imperative must be to cultivate autonomy.

The true path forward is not merely to distribute the spoils of this technological revolution, but to democratize the very means of its creation. The superior strategy is empowerment through universal access to the foundational tools of the new economy. This means powerful open-source AI and cheap, abundant computing, delivered as a utility service as fundamental and reliable as electricity or the telephone network. This is the architecture for genuine self-sovereignty, the preservation of dignity, and the creation of true equality of opportunity. After all, this new form of intelligence was trained on the collective data of humanity. Why, then, shouldn’t the tool itself be given back to us all?

That is the ideal, but you operate in the now. The Great Reallocation is already reshaping your reality, so while we strive for that future, you must secure your place in the present. This starts with an *upgrade* in how you view yourself.

Your survival and success hinge on a single, powerful concept: you must productize your craft and your uniqueness. This is no longer just advice for freelancers or entrepreneurs; it is the new imperative for anyone who is employed and wants to remain so.

In my work building and running businesses, I have come to a critical realization: the framework for launching a successful venture, which I codified in the AMASE Startups method, is no longer just for startups. It has become the operating manual for the individual. The battlefield has changed, and the strategies that build resilient companies are now the very same strategies that must build a resilient career.

Consider how each dimension now applies directly to you:

  • Your Personal Operating System (The Business Dimension): This is your strategic self. How do you operate? What is your unique value proposition, your personal business model that you bring into the larger organization? This is your architecture for creating value.
  • Your Craft as a Product (The Product Dimension): This is where you manage your unique expertise with the discipline of a product manager. It is the sum of your evolving competencies, your mastery of technology, and the tangible quality of your work. In this new market, your craft is the product on offer, and you must be relentless in its upgrades and iterations.
  • Your Cultural Signature (The Culture Dimension): This is the unique environment you initiate through your perspective, personality, speech, and actions. It is the set of principles that governs your work and interactions, creating a powerful and singular element of your moat that attracts those who resonate with your way of being.
  • Your Signal in the Noise (The Visibility Dimension): This is your personal brand, your discoverability. In a world saturated with information, how do you broadcast your value? It is your network, your reputation, your documented successes—your ability to be found by those who need your unique solution.
  • Your Economic Sovereignty (The Finance Dimension): This is your financial autonomy. It is your understanding of the economic value you generate, your skill in negotiating your worth, and your strategy for building financial independence beyond a simple paycheck.

Let this paradigm shift settle in, for it is the new law of professional gravity. The rule is simple: You are not an employee. You are a sovereign enterprise.

The Urgency of This New Reality

Why does embracing this shift feel so urgent? Because it presents you with a stark choice, a decisive fork in your professional destiny.

On one path, you become the architect of your own value, running your career with the discipline and foresight of a competitive business. You understand that your competition is not only between people, but with a holistically transformative technology that is redefining the very rules of the game.

The other path is one of passive resistance and inaction. It is the path where you undergo the pressure of assimilation. On this path, your complex cognitive skills are not just devalued; they are disaggregated—broken down into autonomous, independent units of work ready to be executed by artificial intelligence. Your holistic expertise is commoditized into a collection of tasks, becoming the new blue-collar labor of the information age. On this path, you become a cog in a system, pressured by other humans who are themselves obsessed with cost efficiency and keeping the OPEX down. In their world, you cease to be a strategic asset and become an adjustable variable in an Excel formula.

This is not a distant threat. It is the acceleration of an existing dehumanization. While this mindset only represents a fraction of corporate culture, it is a powerful and growing one. And for the first time, this new paradigm gives you the power to consciously outmaneuver it.

Your Immediate Action Plan: The Four Pillars & Four Habits

To become the architect of your own value, you must build your enterprise of one on four foundational pillars, reinforcing them with four non-negotiable habits.

Pillar 1: Evolve into the T-Shaped Orchestrator

The future does not belong to the shallow generalist—the “jack of all trades, master of none.” That model is obsolete. The new baseline for relevance is the T-shaped professional. This is an individual who grounds their broad, cross-functional knowledge (the horizontal bar of the T) in at least one pillar of deep, specialized expertise (the vertical stem of the T).

This distinction is important. As AI rapidly commoditizes generic, student-level knowledge, it effectively levels the playing field for anyone without a defensible specialization. Your deep expertise is the anchor that gives you the gravity and perspective to manage the broader landscape. It is the backbone that allows you to become an effective Orchestrator.

Your value will no longer be defined by a single, siloed skill, but by your capacity to manage a portfolio of outcomes by conducting a symphony of specialized intelligences. You will lead hybrid teams where highly specialized human experts work in concert with a new class of digital colleague: the hyper-efficient AI Agent. The power lies not in doing, but in orchestrating from a position of deep knowledge.

Imagine you are leading a project to launch a new IT application. Your role is that of the central conductor. You will:

  • Deploy a marketing agent to run a dynamic and targeted social media campaign.
  • Task an adversarial AI to act as your “red team,” relentlessly probing your application for security vulnerabilities.
  • Direct another agent to instantly construct a perfectly formatted product sheet from complex technical specifications.
  • And assign yet another to build and manage a customer survey and feedback system.

This role requires more than just project management; it demands a holistic understanding of the entire value chain—from customer journey to final delivery.
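To make the Orchestrator role tangible for the technically inclined, here is a minimal sketch in Python. The `Orchestrator` class, the agent names, and the task payloads are illustrative assumptions of mine, not a real agent framework; in practice each agent would be an API-backed AI service rather than a simple function.

```python
# A minimal sketch of the Orchestrator pattern described above.
# Agent names and task payloads are illustrative assumptions, not a real API.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Orchestrator:
    """Routes named tasks to registered agents and keeps an audit trail."""
    agents: Dict[str, Callable[[str], str]] = field(default_factory=dict)
    log: List[str] = field(default_factory=list)

    def register(self, name: str, agent: Callable[[str], str]) -> None:
        self.agents[name] = agent

    def dispatch(self, name: str, task: str) -> str:
        result = self.agents[name](task)    # delegate the task to the agent
        self.log.append(f"{name}: {task}")  # record what was orchestrated
        return result

# Illustrative stand-ins for specialized AI agents.
orchestrator = Orchestrator()
orchestrator.register("marketing", lambda t: f"campaign plan for {t}")
orchestrator.register("red_team", lambda t: f"vulnerability report for {t}")

plan = orchestrator.dispatch("marketing", "app launch")
audit = orchestrator.dispatch("red_team", "login flow")
```

The point of the sketch is the shape of the role: you do not execute the tasks yourself; you route them, keep an audit trail, and integrate the results.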

The Elite Advantage: Evolving to the PI-Shaped (Π)

For those who wish not just to thrive but to gain a truly dominant position in the post-AI economy, achieving a T-shape is the most decisive milestone. Yet, there is a higher level of evolution that confers an almost insurmountable advantage: becoming a Π-shaped (Pi-shaped) professional.

As I’ve detailed in my work on identifying rare talent, a Π-shaped professional builds on two deep pillars of specialization—for instance, one in a business domain like finance and another in a technology domain like data science. What gives this structure its immense power is the arch connecting these pillars: a mastery of an interdisciplinary practice, such as Enterprise Architecture and Project Management, which enables them to synthesize disparate fields into a single, coherent vision.

These individuals have a natural head start in the new economy. They are already wired to be the nexus, the strategic hub that can translate deep business needs into complex technological solutions, making them the ultimate Orchestrators. This is the aspirational path for those determined to lead.

Pillar 2: Build Your Moat on Experience, EQ & Artistry

As AI commoditizes IQ-based tasks, your human essence becomes your greatest differentiator.

  • The Emotional Quotient (EQ) Moat: This is your ability to collaborate, inspire, and add to a team’s cohesion. Destructive, selfish behaviors will become terminal liabilities.
  • The Artistic Factor: Your unique creative voice—your aesthetic sense, your storytelling, your capacity for original expression—is a beacon of distinction in a world of uniformity.
  • Your Personal Intellectual Property (IP): This is your most critical asset. It is the sum of your unique methods, success recipes, custom templates, and strategic frameworks forged from your direct experience and “battle scars.”

These elements combine to create your ultimate moat: The Experience.

A few years ago, Wouter Blokdijk, an eminent Architect who used to lead the Architecture Studio and ACOM—an event for and by the vibrant architect and engineer communities at ING—gave a memorable presentation about the power of “Stages.” It stuck with me. The power wasn’t only about the immense effort and the meaning of giving others a platform to express themselves, tell their story, and share knowledge. It wasn’t just about creating a platform that could be standardized. It was about the power to make experiences possible—experiences that touch both the rational and the emotional sides of our brain. This made me realize that Experience is the ultimate moat in the age of AI.

The Experience is what sets you apart from every other player on the market. We all know we need a smartphone to manage our lives, so why do we argue so passionately over choosing between a Samsung, an iPhone, a Google phone, or a Huawei? You’ve guessed it: the experience. You are experiencing a different feeling, a different dialog with the company and its community. The brand, this collective identity, this palette of sentiment—it feels different. And that difference matters. The product design above the functions, wrapped in an experience, matters. The story, and how you tell it, matters.

Another dimension of this moat, which is profoundly human, lies in the realm of sensory value.

Think about that feeling when you enter a French bakery. You are welcomed warmly by the “boulangère,” and immediately enveloped by a symphony of smells—the crisp baguette, the buttery “pain au chocolat,” the sweet “tarte aux pommes.” You chit-chat for a moment while ordering a sandwich made with fresh vegetables and bread straight from the oven, perhaps with a dollop of handmade mayonnaise, and you add a bag of light, sugary “chouquettes” for dessert. You say goodbye, and the whole encounter leaves you with a deep feeling of satisfaction, already anticipating your next visit.

This experience is unique, irreplaceable, and memorable. For the boulangère, the bakery is her “Stage.”

Her expertise lies in taste and scent, but the principles are universal—the touch and feel of a bespoke garment, the carefully curated ambiance of a store, the soul of high-end gastronomy. These are innovations that make sense primarily from human to human. Of course, AI can assist in the research and production of these things, but it cannot replace the human perspective required to truly understand them. Because ultimately, to empathize, communicate, sell, and bring value in the sensory world, you need the one thing an AI will never possess: a human body and the lived experience that comes with it.

Pillar 3: Embrace Entrepreneurship

The traditional career ladder (including the middle management layer) is being challenged. The future belongs to the entrepreneur, and this identity now takes many forms.

  • It can be the ‘solopreneur,’ a sovereign agent leveraging their unique expertise in the open market.
  • It can be the ‘founder,’ who rallies a team to build a new company from the ground up.
  • And critically, it can be the ‘intrapreneur’—the employee who acts as an agent of change, architecting new ventures and driving innovation from within the walls of their existing organization.

Whichever path you choose, the underlying mindset is the same: it is about proactively creating and capturing value, not just fulfilling a pre-defined role. It is about building constructive solutions that push your nation, society, and humanity forward.

While this path has traditionally involved navigating complex administration, the very forces driving this new economy are lowering the barriers to entry. The proof lies in the massive capital flowing not just to the tech titans, but to a new generation of agile, visionary startups. In Europe, for instance, France’s Mistral AI has mounted a formidable challenge to the US giants, raising over €600 million by providing powerful open-weight AI models and proving that strategic innovation can attract world-class investment. Meanwhile, UK-based Wayve is revolutionizing transportation, securing over $1 billion in a landmark funding round to build ‘embodied AI’ for truly autonomous vehicles that can learn and adapt to any environment.

This lowering of barriers isn’t just financial; it’s profoundly technological. The advent of Generative AI and Augmented Coding (also known as Vibe Coding) is ushering in a no-code revolution. Building websites, applications, and other kinds of software is no longer the exclusive domain of specialist coders. Instead, you can architect solutions using natural language prompts in your own language. Pioneering platforms like Replit, Bolt.new, and Firebase.studio are taking this even further, abstracting away the complexities of the backend by managing your infrastructure for you.

For an application of moderate complexity, the traditional barriers are evaporating. Your imagination, your focus, and your available time are now the primary constraints on what you can create.

Pillar 4: Be a Discoverer

Research is hot, trending, and now acknowledged as a major instrument of geopolitical soft power.

Nature Index 2024

The new global currency is not just capital; it is research talent, with nations actively competing to attract and retain the world’s sharpest minds. Look no further than the race for doctorates, where China now graduates more STEM PhDs annually than the United States, creating a seismic shift in the global talent landscape. This arms race for talent is mirrored in the explosive output of their work. 

This trend is not a matter of debate; it is a statistical reality, quantified with stunning clarity by the Nature Index 2025. The report confirms that China now decisively leads the world in high-quality research output, ahead of the US, Germany, the UK, and Japan. But the real story is in the momentum: China’s contribution surged by an incredible +17.4% in a single year (from 2023 to 2024). To put its lead into perspective, China’s output of high-quality publications is now over 5,343 points higher than the second-place United States and more than 26,714 points ahead of third-place Germany.

The Stanford Institute for Human-Centered AI’s 2025 report, for instance, highlights this exponential growth, showing that the number of AI publications has more than doubled since 2010, demonstrating a relentless acceleration of discovery.

This academic explosion has a practical, even more chaotic, counterpart. Consider the number of AI models published on Hugging Face, the de facto “super-marketplace” for the global AI community. As of this writing, the platform hosts 1,898,890 models (July 2025), having added nearly one million new models in just the past nine months. It is a cognition explosion, happening in real time.

This macro-trend finds its corporate manifestation in a “war for brains” raging between Google, Meta, OpenAI, and Microsoft. The simple act of recruitment has evolved into a high-stakes talent transfer market akin to that for FIFA and NBA stars, with compensation packages reaching into the hundreds of millions. Consider that the deals for elite AI researchers now exist in the same stratosphere as Kylian Mbappé’s estimated €320 million with Real Madrid across five seasons or Jaylen Brown’s landmark five-year, $304 million contract with the Boston Celtics.
Look no further than Microsoft’s 2024 deal to hire Mustafa Suleyman and the majority of his Inflection AI team—an unconventional “acqui-hire” valued at over $1 billion when accounting for licensing and other fees. This move was mirrored in mid-2025, when Meta poached Alexander Wang from Scale AI as if capturing a Mythical Pokémon—exceptionally rare, strategically crucial, and emblematic of a deeper ambition—to lead their newly formed ‘Superintelligence’ team, as part of a broader strategic investment involving a $14.3 billion (49%) stake in Scale AI. In both instances, these were not simple talent acquisitions; they were strategic investments in the very capacity for future breakthroughs and driving the “road to Artificial SuperIntelligence (ASI)”.

This dynamic extends far beyond just AI. It is the same in healthcare, with bio-engineers and researchers in genomics developing tools to revolutionize health. It is the same in defense and even in foundational science with the race for quantum computing. The competition for highly qualitative minds—people able to work in cutting-edge research teams—is the real invisible war. The goal of these teams is to produce the papers, the patents, and the commercial intellectual property that create a true, unassailable competitive advantage—a quantum leap of insight that remains, for now, far beyond the creative potency of any AI. To position yourself here, among the discoverers, is to place yourself at the highest and most secure echelon of the new economy.

Yet, even this moat is not eternal. We must acknowledge the stated ambitions of leaders like OpenAI’s Sam Altman, who openly seek to build AI models capable of making novel scientific discoveries themselves. We are not there yet, but it is a frontier to be watched with active vigilance.

The Four Foundational Habits

Acting on this framework requires discipline. These four habits are not just suggestions; they are the new requirements for professional survival and relevance.

But before we detail them, let’s observe how the future of work is already unfolding through clear, undeniable trends:

  • The Normalization of Personal AI: Personal AI assistants are rapidly becoming the norm in our lives. For our ten-year-old children, growing up with an AI will be as natural as it was for millennials to grow up with a smartphone.
  • The Incremental UI Absorption: Specialized application interfaces will gradually be absorbed by these personal AI assistants. Through API integration and advanced protocols for context-sharing (the Model Context Protocol, MCP) and agent-to-agent communication (A2A), these assistants will be able to reason across multiple applications and data sources, becoming a single, conversational front-end for our digital lives.
  • The Persistence of Unreliability: Despite advances like web search grounding, thinking models, and Retrieval-Augmented Generation (RAG), Large Language Models (LLMs) still hallucinate. We must remember that their output is a synthesis of other humans’ content, which is not the same as verified, truthful fact.
  • The Law of Exponential Progress: The technology is only getting better, faster, and more potent. The performance gap between 2020’s GPT-3 and today’s state-of-the-art models is not just an iteration; it’s a light-year leap in capability.
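The unreliability point deserves a concrete illustration. The toy sketch below shows the spirit of retrieval grounding: answer only when a retrieved source supports the query, and abstain otherwise. The tiny corpus and the word-overlap scoring are deliberate simplifications of mine, not a production RAG pipeline, and even real RAG systems only reduce, never eliminate, hallucination.

```python
# Toy sketch of retrieval-grounded answering: answer from a retrieved
# source document, or abstain. The corpus and the word-overlap scoring
# are illustrative assumptions, not a real RAG pipeline.

corpus = {
    "gpt3": "GPT-3 was released by OpenAI in 2020.",
    "rag": "Retrieval-Augmented Generation grounds answers in retrieved documents.",
}

def retrieve(query: str):
    """Return the document sharing the most words with the query, if any."""
    q = set(query.lower().split())
    best, best_score = None, 0
    for doc in corpus.values():
        score = len(q & set(doc.lower().rstrip(".").split()))
        if score > best_score:
            best, best_score = doc, score
    return best

def grounded_answer(query: str) -> str:
    doc = retrieve(query)
    # Abstain instead of hallucinating when nothing relevant is retrieved.
    return doc if doc else "I don't know: no supporting source found."
```

The design choice worth internalizing is the abstention branch: a grounded system says "I don't know" when its sources run out, whereas a bare LLM will fluently improvise.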

Considering this new reality, I invite you to strengthen your sovereign agency with these four foundational practices:

  1. Sovereign Critical Thinking: This is the essential safeguard. You must cultivate a healthy skepticism towards AI-generated content and, more importantly, towards the claims of people and enterprises leveraging AI at scale—especially those operating in the “High-Risk AI” category defined by frameworks like the EU AI Act. This is about preventing lazy reasoning and refusing to outsource your judgment. The “how” of a process is often easier to challenge than the “what” of a stated fact, yet to build a true capability, you need to master both. Honing your critical thinking makes you a more discerning user of AI, which in turn increases the velocity of your own training and gives you an edge over those who accept its output uncritically.
  2. Continuous Learning: This is paramount because the Delta never stops. You must leverage modern tools to your advantage, using AI itself as an engine for comprehension. Dive into platforms like ChatGPT and the information streams on X to accelerate your learning and keep pace with a world that refuses to stand still. This is your first line of defense against obsolescence.
  3. Continuous Practice: This is where theory is forged into capability. It is not enough to think you know how something is done; you must know how to do it through direct, relentless application. Practice is how you accumulate the concrete examples, the case studies, and the definitive experience that form the bedrock of your personal IP. It is through doing that you gain the tangible proof of your value.
  4. Engineered Serendipity: In a world overflowing with noise, you cannot simply wait to be found; you must engineer the conditions for opportunity to come to you. This isn’t about shouting louder than everyone else; that is a defective and inefficient strategy. True serendipity is engineered by building a believable value proposition rooted in the tangible assets you created through practice. It is the deliberate combination of your sovereign thinking, your continuous learning, and your proven experience that creates a gravitational pull for the most meaningful opportunities, allowing you to be “picked” when it matters most.

In a world that seeks to commoditize your talent into a line item in an Excel formula, becoming the architect of your own enterprise is the ultimate expression of sovereign agency and the only way to truly ride the Delta.

But the Delta, as powerful as it is, is not the ultimate source. It is probably the most visible expression of a deeper, more fundamental law of our hyper-connected world: The Law of the Equilibrium Imperative. In a future article, we will dive into this foundational principle and its one immutable rule: a system will always find a new equilibrium, and you can either be a willing architect of it or a casualty of the adjustment.

Categories
Agents Artificial Intelligence Business Business Strategy Copilot Information Technology Innovation Microsoft

⭐ Microsoft brings “Everyday AI” to over 320 million MS 365 users!

Yesterday, I received a notification that my Microsoft 365 subscription now includes MS Copilot 365 AI credits.

It’s a smart move to integrate AI tools more broadly, especially considering that Microsoft 365 has over 320 million daily active users globally (as of 2024), and more than 2.3 million companies use the “office” productivity suite.

According to the FAQ, the Personal and Family plans contain 60 AI credits.

What’s an AI credit? I quote: “A credit is counted each time you specifically request a Copilot or equivalent AI services action, such as generating text, a table, or an image.”

My experience with Copilot 365 so far has shown incredible productivity boosts in MS Teams and Excel. However, PowerPoint still feels like it needs refinement.

I’m very keen to explore specialized versions of Copilot like the Project Management Copilot to enhance team efficiency further.

Have you tried MS 365 Copilot?

What are your experiences with MS Copilot so far?

Let’s share our experiences.

Categories
Artificial Intelligence Automation Business Business Strategy Engineering Innovation Robots Strategy Technology Technology Strategy

Update on Tesla’s Optimus #Robot – it is progressing fast

Tesla’s Optimus Robot learning from humans

The most impressive part is the technique employed by the Tesla team for accelerating the robot’s dexterity: the robot physically learns from human actions. 

Now, let’s step back and analyse Tesla’s master plan here:

(Putting on my business tech strategy goggles) 

1. Tesla builds electric cars augmented with software programmability.

2. Tesla provides an electric grid as a service.

3. Tesla builds gigafactories that maximize the automation of car manufacturing. Almost every single part of the pipeline is robotized and optimized for speed of production.

4. Tesla builds Powerwalls (by providing energy storage, it also creates a decentralized power station network).

5. Tesla brings autonomous driving (FSD) to Tesla cars. Essentially, cars are now transportation robots governed by the most advanced AI fleet management system.

6. Tesla builds its own chips (FSD Chip and Dojo Chip).

7. Tesla builds its own supercomputers.

8. Tesla launches Optimus, which aims to replace the human workforce in factories and warehouses.

9. X.ai, X’s affiliated AI company, which recently raised $6 billion, brings the Grok AI model trained on X/Twitter data. While you may say X data is not the best, X has an algorithm balanced with human judgment (Community Notes), AND the platform brings together the largest set of news publishing companies. Basically, it automates curation and accuracy.

10. A version of the Grok AI model will likely power Optimus’s human-to-robot conversational interface.

11. Tesla cars will be turned into robotaxis, disrupting not only taxi companies but also Uber (the Uber/Tesla partnership may not be a coincidence), and eating into the shares of Lyft and BlaBlaCar.

12. Tesla will enter the general services and retail industries to offer multi-purpose robots: cleaning services for business offices and grocery stores, filling the workforce shortage in the catering (hotel-restaurant-bar) industry, etc.

Tesla is not the only one moving into the “Robot Fleet Management” business. Chinese companies like BYD (EV) offer strong competition, and several robotics companies (like Boston Dynamics and Agility Robotics) are racing for pole position.

#AI #artificialintelligence #Robotics #Optimus #EV #software #EnergyStorage #Automation #powerwall #AutonomousVehicles #FSD #chips #HighPerformanceComputing #Robots #GrokAI #NLP #robotaxis #innovation #WorkforceAutomation

Categories
Strategy Architecture Artificial Intelligence Blockchain Business Business Strategy Enterprise Architecture Organization Architecture Technology Technology Strategy

Architecting the Future: How RePEL Counters VUCA for Modern Enterprises

I was first introduced to the term VUCA by my ex-colleague, Julian TROIAN, a leader in coaching who steers the talent management practice. This revelation came during a particularly challenging phase for us, mirroring the struggles of many other companies. We found ourselves navigating the intricacies of the COVID lockdown while simultaneously undergoing a significant shift in the corporate way of working. Our project portfolio was expanding, driven by the rapid pace of transformations, and we felt the weight of increasing regulatory pressures. But we recognized that these challenges were not ours alone. Then, significant disturbances emerged: the Eastern Europe conflict and a surge in inflation, to name a few.

Moreover, the world stood on the brink of simultaneous technological revolutions. Innovations like blockchain and the nascent promise of the metaverse hinted at new horizons. Yet, it was the seismic shifts brought on by Generative Artificial Intelligence that seemed most profound.

VUCA is an acronym encapsulating the themes of volatility, uncertainty, complexity, and ambiguity. Herbert Barber coined the term in 1992, building on the book “Leaders: The Strategies for Taking Charge”. I believe many can relate to these elements, sensing their presence in both professional settings—perhaps during office hours—and in personal moments with family.

Life, in its essence, might be described by this very term. We all traverse peaks and lows, facing situations of heightened complexity or vulnerability. The challenge is not just to navigate these periods but to foster strength and ingenuity, arming ourselves for future obstacles.

I consider myself fortunate to have garnered knowledge in enterprise architecture—a domain that inherently equips any organization, product, or service with resilience, making adaptability part of its very DNA.

In the subsequent sections, I explore strategies for developing VUCA antibodies.

From Vulnerability to Resilience: Building an Unshakable Future

Rather than getting bogged down by vulnerabilities, it’s about harnessing resilience. Robustness is the key to building thick layers of protection, ensuring longevity in our ventures. By deliberately creating anti-fragile mechanisms, we’re better prepared for tough times. This resilience doesn’t just happen; it’s constructed. Architects weave it into their designs across various realms:

  • Information Systems: These are designed to be failure resistant. Potential mistakes and erratic behaviors are predicted and integrated into the system as possible anomalies. In such events, responsible teams must give clear procedures to users, operators, and administrators to restore the system to its standard operational mode.
  • Data Management: From acquisition and processing to analytics and visualization, there’s complete control over the data flowing into the system. This can range from a service request made over the phone to a command initiated by an AI, or even a tweet that prompts the system to respond.
  • Security: Safeguarding the system against potential hacks is crucial. Additionally, it’s vital to design the system in a way that vulnerabilities don’t open doors for intrusions. Depending on the chosen architectural delivery method, this can be addressed proactively or reactively.
  • Infrastructure: The foundational physical infrastructure, tailored to the system’s needs, must be aptly dimensioned. At times, specialized hardware like GPU-driven servers, or programmable network devices might be essential to cater to particular needs during both the development and operational phases.
  • Organization: People, integral to the corporate ecosystem, influence the system’s effectiveness. Their actions and behaviors enhance system efficiency, especially when elements like trust, making amends for failures, regular maintenance, and adaptability to change are activated.

All these aspects aren’t mere byproducts; they’re deliberately designed system features.
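For the technically minded, here is what “failure resistance as a designed feature” can look like at its smallest scale: a bounded retry that degrades gracefully to a known fallback instead of failing outright. The function names, retry limit, and fallback are illustrative assumptions of mine, not a prescription for your architecture.

```python
# Minimal sketch of designed-in resilience: retry a flaky operation a
# bounded number of times, then degrade gracefully to a known fallback.
# Names and limits are illustrative assumptions.

def resilient_call(operation, fallback, retries: int = 3):
    """Try `operation` up to `retries` times; on repeated failure,
    return `fallback()` so the system stays in a known, degraded mode."""
    for _ in range(retries):
        try:
            return operation()
        except Exception:      # anticipated anomaly: swallow and retry
            continue
    return fallback()          # documented degraded behavior, not a crash

# Simulated erratic dependency: fails twice, then succeeds.
attempts = {"count": 0}
def flaky_service():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient failure")
    return "live data"

result = resilient_call(flaky_service, fallback=lambda: "cached data")
```

The anomaly and the degraded mode are both anticipated in the design, which is exactly the point made above: resilience is a deliberate feature, not a byproduct.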

From Uncertainty to Probable Planning: Navigating with Confidence Through Uncertain Waters

Predicting the future is beyond anyone’s capability, but architects can narrow down scenarios to the most probable outcomes. Through modeling techniques like system design, trend analysis, scenario planning, and causal loops, they can forecast with a higher degree of accuracy. However, the planning phase isn’t without challenges:

  • Resources: There are times when constraints in time, finances, skills, and materials can make a proposed solution unfeasible. Recognizing this early on is vital.
  • Leadership: A wavering decision-maker, filled with doubt, can be a significant impediment. This is a leadership challenge that needs addressing at the top. In such a situation, the architect must highlight the unstable matter with benevolence and candor.
  • Team: The implementation is only as good as the team behind it. If team members don’t possess the necessary skills or their abilities don’t align with the mission’s complexity, especially when executing multiple plans simultaneously, it will compromise the execution of the plan.
  • Expertise: Last but not least, the architect’s seniority and the time allocated to address your transformation’s VUCA elements also play a critical role.

From Complexity to Engineering: A Blueprint for Simplification

Sometimes, complexity arises from perception, misunderstanding, or underestimating a situation – often, it’s a mix of these elements.

Imagine you have three wooden chairs, and you wish to create a sofa. Is it even possible? Fortunately, Ikea offers a DIY toolbox that can help you realize this vision. When you describe your idea to the store specialist, she confidently directs you to aisles A8 to C12 for the necessary components. At first, you feel relief. But soon, doubts about your abilities confront you. Even with your experience in crafting wooden furniture, you’re unsure about the mechanisms you’ll need, the type of finish to choose, the tools required for precise cuts, and the best materials for durability. Are these materials environmentally friendly? This confusion and uncertainty are akin to experiencing VUCA.

The architect’s role is to first understand the complexity, determine the facts, and uncover what’s unknown, converting it to known information. Then, the challenge or problem is segmented into manageable pieces. I refer to this process as “Undesign.” The goal of undesigning is to get a clear and detailed view of the end goal by atomizing the current state, structure, and behavior. This is achieved through methods like decomposition, deconstruction, alternate system modeling, and sometimes reverse engineering. Subsequently, the architect uncovers a path to transform and assemble these components.

The essence of engineering is to assemble these components using identifiable, simple building blocks. These blocks are selected, modified, added, and connected in a logical order, ensuring the right materials, technologies, and tools are used. People with the right skills can then efficiently bring the project to life, ensuring it’s as seamless and enjoyable as possible. Even the user’s psychological experience matters!

In summary, what seems intricate and complex can be distilled into simpler, manageable parts.

From Ambiguity to Lucidity: Transitioning from Wishful Thinking to Tangible Outcomes

Architects don’t just exist in the present; they shape the future. Their responsibilities lie in meticulously designing and planning changes that will inevitably impact an organization’s products or services. Any vision, no matter how abstract, becomes initially tangible through their work. They ensure this by providing explicit construction instructions, detailed models of the final product, and ensuring the requisite resources and skills are in place. By doing so, architects play a pivotal role in turning ambiguity into precision.

Moreover, it’s the architect’s responsibility to align ambitions with the resources available, ensuring that goals are realistically achievable.

In wrapping up, VUCA can be perceived as a daunting challenge. But with the right leaders on board, RePEL becomes a natural response to unfriendly environments and stressful times. Such leaders hold the key to transforming volatile situations into clear, well-defined future pathways, keeping enterprise entropy under control.


How to grasp the blockchain world and safely take your first steps into Web 3.0


The following is a quick guide explaining how to become acquainted with the world of blockchain, crypto, and Web 3.0:

  1. First, I invite you to start with these videos:
    1. What is a Blockchain: https://youtu.be/rYQgy8QDEBI
    2. The difference between Bitcoin and Ethereum blockchains: https://youtu.be/0UBk1e5qnr4
    3. What is a Smart Contract: https://youtu.be/ZE2HxTmxfrI
    4. What is a Stablecoin: https://youtu.be/pGzfexGmuVw
    5. What is an NFT: https://youtu.be/FkUn86bH34M
  2. Understand the key concepts of Web 3.0 by googling them: Blockchain, Wallet, Cryptocurrency, (crypto) token, Mining, PKI, Smart Contracts, Dapps, Decentralized Exchanges (DEX), Staking, ICO, ITO, Layer 1/2/3 protocols, transaction fees, consensus, etc.
  3. Know the major Web 3.0 technologies, their differences, and their value propositions: Bitcoin, Ethereum, Polkadot, Cardano, Cosmos, Polygon, Hyperledger, IPFS, Storj, Solana, Tether, etc. Consider not only the networks but also the development tooling and the distribution channels.
  4. Understand which new business models, organizational models (such as the DAO), and features Web 3.0 brings relative to Web 2.0. Then research how Web 2.0 and Web 3.0 complement each other.
  5. Select one blockchain technology and stick to it at first, to understand how Dapps are built, distributed, and promoted in its ecosystem. Some of the most popular, depending on your areas of interest: Uniswap (DeFi), OpenSea (digital art, NFTs), Axie Infinity (gaming), …
  6. Understand token economics and how such large valuations and market capitalizations are possible.
  7. Learn by doing!
    • Learn to use block explorers like Etherscan (for Ethereum transactions) and a Bitcoin explorer (for Bitcoin transactions) to inspect on-chain activity. Now is the time to look up your own wallet!
    • Then, you could fund your wallet using popular, reputable crypto exchanges like Kraken, Coinbase, or Crypto.com.
      Notice that you can buy cryptocurrencies with PayPal, but you currently cannot transfer them to your own wallet. PayPal holds the bitcoin on your behalf.
  8. Follow the various companies and foundations expanding Web 3.0 (tech websites, Twitter) to grasp how the ecosystem is growing. Then, ask yourself how these companies are regulated.
  9. Interact on LinkedIn, Twitter, and Reddit with knowledgeable people and enthusiasts.
  10. If you are an IT engineer, start programming with Solidity. I find the Truffle Suite genuinely good for building smart contracts and NFTs with ease.
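The mining and consensus concepts from steps 2 and 6 become much clearer once you see them in code. Below is a toy proof-of-work loop in Python, a minimal sketch of the idea behind Bitcoin-style mining, not how production chains are implemented; the `mine` function name and the difficulty value are illustrative assumptions:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce such that SHA-256(block_data + nonce)
    starts with `difficulty` hex zeros -- a toy proof-of-work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            # Anyone can re-hash (block_data + nonce) to verify the work.
            return nonce, digest
        nonce += 1

nonce, digest = mine("my first block", difficulty=4)
print(nonce, digest)  # the hash begins with "0000"
```

Raising `difficulty` by one multiplies the expected search time by 16, which is why real networks tune difficulty to keep block times stable while verification stays a single hash.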

The European Data Act: actually, can your data become a reliable source of income?


The European Data Act has recently been published.

It aims to clarify and strengthen the governance framework of the data economy.

In a nutshell (extract):

“The Data Act will give both individuals and businesses more control over their data through a reinforced data portability right, copying or transferring data easily from across different services, where the data are generated through smart objects, machines, and devices.”

“For example, a car or machinery owner could choose to share data generated by their use with its insurance company.

Such data, aggregated from multiple users, could also help to develop or improve other digital services, e.g. regarding traffic, or areas at high risk of accidents.”

Some thoughts on this

1️⃣ I wonder to what extent the boundaries of your data ownership can be explicitly defined, then transparently coded in IT systems, so that a “data asset” is legally bound to you as your property.

2️⃣ After this, you could ask Facebook, Instagram, and TikTok to share a piece of the cake: a % of the revenue generated from your data.
Let’s face it, it looks like a game-changer, if it can really be implemented.

3️⃣ Ultimately, you can capitalize on GDPR architecture. It pushes the concepts of data ownership, consent management, data counters, data KPIs, data censorship management, IAM, data expiry management, etc.

4️⃣ Beyond multi-cloud oversight solutions, this is an excellent use case for permissioned blockchains like Hyperledger Fabric (e.g. Infrachain).

5️⃣ Innovative business models could arise, like “Mutual Data Funds” or “Open Data Lakes”, where a set of businesses or individuals would provide qualified and certified data sources acting as “Value-Added Data Sources”, similar to Bloomberg or Reuters for financial news.

Also, these Mutual Data Pools are well suited to be plugged in as oracles in blockchains (#ethereum, #chainlink, #binance, etc.)

I can already envision the pitch of startups like “We are the Bloomberg of space mining Data” (which would be awesome by the way👍)

6️⃣ This could boost the API economy, and further push the adoption of the GraphQL and AsyncAPI standards.

7️⃣ I reckon open industry data models are a much better way to start. They would give regulators (e.g. the Commission de Surveillance du Secteur Financier (CSSF), the CNPD – Commission nationale pour la protection des données, the CNIL – Commission Nationale de l’Informatique et des Libertés), auditors, and regtech firms (e.g. Scorechain) a common ground on which to build their control frameworks and oversight infrastructure.
Now, it is time to stitch them together.

Links