Categories
Artificial Intelligence, Design, GPT3, Illustration, Information Technology, UI, Architecture

Taddy & Rapty, a friendship in 68 artistic styles, using AI

Goal

The following web story presents a series of 68 pictures of the same scene, generated by the artificial intelligence DALL-E 2.

Each of these pictures is drawn in a unique artistic style.

The intent is to test the depth and the “creativity” that can be reached through prompt engineering.

The prompt used is:

“A Teddy bear doing a high five with a velociraptor, <name of artistic style>”.

Downloadable resources

The pictures and the list of artistic styles, in CSV, JSON, MS Excel, and CQL (Neo4j) formats, are available in the following GitHub repository:

https://github.com/yannickhuchard/artistic_styles

🫡

Categories
Artificial Intelligence, Deep Learning, Information Technology, Technology

This new AI for video editing makes you smile and change gender

StyleGAN face edit of Barack Obama

Check out this stunning #ai improvement: edit #video to make anyone smile, look angrier, older, younger, or more serious, change gender, and more.

It even works on animated characters!

https://stitch-time.github.io/

Here is also a great video made by the channel Two Minute Papers:

Big thanks to Rotem Tzaban, Ron Mokady, Rinon Gal, Amit Haim Bermano, and Tel Aviv University.

Categories
Technology, AR/VR, Artificial Intelligence

Here is how Meta is positioning its new AI and AR/VR services to support companies in developing their metaverse

Check out the following video introducing Builder Bot, which creates VR worlds with your voice:

I have to acknowledge that it is a clever move. The first versions of VMESS in 2015 had quite a similar goal in mind.

Meta intends to be a Metaverse Forge Platform: the host of a Digital Multiverse. At the end of the day, it is about giving people the possibility to pioneer the Metaverses (with an “s”).

Overall, it is an even greater strategic milestone for the Meta Group, as Facebook needs to pivot to some degree in order not to meet the same fate as MySpace.

However, Mark Zuckerberg has two problems to solve (as the recent USD 230 billion loss in market value shows) before then:

  1. Even though Facebook’s social value has been proven by its 2.9 billion users, the company carries the negative image of being an “evil company”.

    Considering the amount of data gathered on each member of the social network, its influence on political opinions, the toxicity of Instagram for youngsters, and so on, this image needs to change.
  2. Mark Zuckerberg himself. Yes, Mark is the public face of Facebook, and that image is not shining at the moment. Maybe it would be wiser for Meta to put forward a different public face (communication-wise) to complete its transformation.

The other big players

Microsoft Mixed Reality: https://docs.microsoft.com/en-us/windows/mixed-reality/

NVIDIA Omniverse: https://developer.nvidia.com/nvidia-omniverse-platform

Interesting challengers to follow

Niantic, Inc., the company that brought you Pokémon GO: https://nianticlabs.com/

Roblox, a social gaming platform where players can create their own games and let others play them: https://www.roblox.com/

Render Network on the Solana blockchain: https://rendertoken.com/

Categories
Artificial Intelligence, Reinforcement Learning

This is how Open-ended Learning gives A.I. these brilliant moves

Watching A.I. agents play “cat-and-mouse” and “king of the hill” games, then make their own decisions, is so much fun!

Reinforcement learning is just amazing.

Open-Ended Learning demonstration

#AI #RL #google #deepmind #tech2check

Categories
Artificial Intelligence, Automation, IT Architecture, Technology

Intelligence of Information Systems – From Simple Mind to Overmind

Information Systems Intelligence – From Simple to Overmind (V001)
Categories
Artificial Intelligence, Education, Engineering, Society, Technology, Wisdom

A.I. – What we want and what we do not want

What we want and do not want from A.I. (V001)

The Direction of Civilizations Geared with A.I.: A Comprehensive Exploration

(updated: 12/09/2025)

Artificial Intelligence (AI) is not just another technological advancement—it is a generational disruption, a force that is reshaping industries, economies, and societies at an unprecedented pace. As I’ve often said, AI is your new UI and your new colleague. But with this transformation comes a fundamental question: What kind of civilization do we want to build with AI?

The mind map I’ve created, “The Direction of Civilizations Geared with A.I.,” explores this question by dissecting both the aspirations and apprehensions surrounding AI. It’s a visual representation of the duality of AI’s impact—its potential to elevate humanity and its risks if left unchecked.

However, my perspective is not about rejecting automation or end-to-end systems like Gigafactories. I am not against automated systems or super-systems that operate seamlessly, as long as humanity retains the knowledge to sustainably modify, upgrade, or halt these supply chains. What I oppose is the loss of foundational knowledge—the blueprints, the ability to relearn, and the erosion of stable resilience in our societal and industrial systems.

What We Want from A.I.: The Green Path

1. AI as a Catalyst for Human Potential

  • AI as a Co-Pilot for Humanity: AI should augment human capabilities, not replace them. It should act as a proactive advisor, a digital colleague that enhances productivity and decision-making. AI should handle repetitive tasks, but only when there is no gain in repeating them (for example, automation is out of the question when the gain is learning, fun, or therapy). Either way, the choice must remain ours.
  • Human-AI Collaboration: The future lies in symbiotic relationships between humans and AI. AI should free us to focus on what truly matters—connecting with others, growing individually, and thriving as a civilization. This technology saves us time, allowing us to focus on what brings us closer to our true selves (know thyself better) and our life purpose.

2. Ethical and Transparent AI

  • Ethical AI: AI systems must be designed with ethical frameworks that prioritize fairness, accountability, and transparency. This is not just a technical challenge but a societal imperative.
  • Transparency and Explainability: AI decisions should be interpretable. Black-box models erode trust; explainable AI fosters accountability and user confidence.

3. AI for Societal Good

  • AI for the Common Good: AI should address global challenges—climate change, healthcare, education, and poverty. It should be a tool for equity, not exclusion.
  • Democratized AI: Access to AI should not be limited to a privileged few. Open-source models, affordable tools, and educational initiatives (like Cursor AI Pro for students) are steps toward democratization.

4. AI Aligned with Human Values

  • Human-Centric AI: AI should reflect human values—compassion, empathy, and respect for diversity. It should not perpetuate biases or reinforce societal divides.
  • Cultural Sensitivity: AI models must be trained on diverse datasets to avoid cultural insensitivity or misrepresentation.

5. AI as the Great Balancer

  • Because AI is the projection and compounding of humanity’s intelligence, it is also the Great Balancer, with the greatest capacity to be unbiased by design, impartial to special interests, and uninterested in self-gain. Its intent should be to serve as a better “super-tool” for the benefit of each human and humanity as a whole. AI should act as a neutral arbiter, ensuring fairness and equity in its applications.

6. Sustainable and Upgradable Systems

  • Knowledge Retention: Even as we embrace automation, we must preserve the blueprints and foundational knowledge that underpin these systems. This ensures that we can adapt, upgrade, or halt them if necessary.
  • Resilience and Adaptability: Systems should be designed with resilience in mind, allowing for continuous learning and evolution without losing human oversight.

What We Do Not Want from A.I.: The Red Flags

1. Job Displacement and Economic Disruption

  • Automation Without Transition Plans: AI-driven automation will disrupt labor markets. Without reskilling programs and social safety nets, this could lead to mass unemployment and economic instability.
  • Loss of Human Skills: Over-reliance on AI risks atrophying critical human skills—creativity, critical thinking, and interpersonal communication.

2. Bias and Discrimination

  • Algorithmic Bias: AI systems trained on biased data can perpetuate discrimination. For example, hiring algorithms favoring certain demographics or facial recognition systems with ethnic or disability biases.
  • Reinforcement of Inequality: AI could widen the gap between the financial or political elite and the rest of society, creating a new class of “AI haves” and “have-nots.”

3. Loss of Human Agency

  • Over-Dependence on AI: If AI systems make decisions without human oversight, we risk losing control over our own lives. This is particularly dangerous in areas like healthcare, justice, and governance.
  • Manipulation and Misinformation: AI-powered deepfakes and propaganda tools can undermine democracy and erode public trust.

4. Existential Risks

  • Unchecked AI Development: The pursuit of Artificial General Intelligence (AGI) without safeguards could lead to unintended consequences, including loss of human control over AI systems, transforming a tool into an autonomous species.
  • AI in Warfare: Autonomous weapons and AI-driven military strategies pose ethical dilemmas and escalate global security risks, mostly because of their scale and ease of access and production, combined with human-level intelligence.

5. Loss of Foundational Knowledge

  • Erosion of Blueprints: The most critical risk is the loss of foundational knowledge—the blueprints, the ability to relearn, and the capacity to sustainably modify or halt automated systems. Without this knowledge, we risk creating systems that are brittle, inflexible, and beyond our control.
  • Decline of Resilience: A civilization that cannot adapt or recover from disruptions is not sustainable. We must ensure that our systems—no matter how automated—remain resilient and adaptable.

The Path Forward: Navigating the AI Landscape

The mind map is not just a static representation—it’s a call to action. To harness AI’s potential while mitigating its risks, we must:

  1. Design AI with Ethics at Its Core: Embed ethical considerations into every stage of AI development, from data collection to deployment.
  2. Foster Human-AI Collaboration: Create systems that enhance human potential rather than replace it.
  3. Democratize AI Access: Ensure that AI benefits are accessible to all, not just a privileged few.
  4. Regulate Responsibly: Governments and organizations must establish clear guidelines for AI use, balancing innovation with accountability.
  5. Preserve Foundational Knowledge: Even as we automate, we must retain the blueprints and the ability to relearn. This is the key to sustainable and resilient systems.
  6. Invest in Education and Reskilling: Prepare the workforce for an AI-augmented future, emphasizing skills that AI cannot replicate—creativity, emotional intelligence, and strategic thinking.

Conclusion: AI as a Magnifying Glass of Humanity

AI is a mirror—it reflects our values, our biases, and our aspirations. The direction of civilizations geared with AI depends on the choices we make today. Will we use AI to build a more equitable, innovative, and humane world? Or will we allow it to deepen divisions, erode trust, and undermine human agency?

As I’ve written before, change is life’s engine. AI is not a destination but a journey—a journey that requires wisdom, foresight, and a commitment to the greater good. We must embrace automation, but never at the cost of losing the knowledge that empowers us to adapt, upgrade, and, if necessary, stop these systems. The mind map is a starting point for this conversation, but the real work lies ahead.

Let’s shape the future of AI together—intentionally, consciously, and boldly.

Yannick Huchard
CTO | Technology Strategist | AI Advocate
Website | LinkedIn | Twitter