
About Miguel Chamochin

Digital, Data & AI Transformation. @Accenture @AgendaCAF & Professor @IEuniversity. Passionate about this wonderful planet. Dad, lifelong learner, runner.

Rethinking Data Operating Models: One Size Doesn’t Fit All

If your data strategy isn’t delivering business impact, it’s time to rethink the Data Operating Model (DOM) behind it.
We often focus on tools and platforms, but without the right DOM, even the best data strategies struggle to scale, govern, or generate ROI. DOMs align strategy with execution—embedding governance, literacy, and data quality across the enterprise.

Five Proven DOM Archetypes:

1.- Decentralized – Domain-led, mesh-style teams
 ▪️ Pros: Flat, aligned with lines of business
 ▪️ Cons: Ownership gaps, legal risk
2.- Network – RACI-based structure layered over decentralization
 ▪️ Pros: Clarifies roles, retains flexibility
 ▪️ Cons: Complex to maintain
3.- Centralized – One team owns all
 ▪️ Pros: Speed, control
 ▪️ Cons: Low agility, tough for transformation
4.- Hybrid – CoE leads, domains execute
 ▪️ Pros: Best-practice factory
 ▪️ Cons: Hard to align, costly
5.- Federated – Subsidiaries empowered with central governance
 ▪️ Pros: Works at global scale
 ▪️ Cons: Requires maturity and resources

There’s no perfect model—just the one that best fits your size, culture, regulation, and maturity. DOMs should evolve: start lean, then grow as literacy, tech, and governance mature.

Practitioner Takeaways
 • Anchor in a problem-back value story
 • Publish a one-page DOM charter: integration, funding, accountability
 • Pilot federated or network models before scaling
 • Build trust by staffing CoEs with rotational talent
 • Track both data KPIs (e.g., completeness, timeliness) and business KPIs (e.g., ROI, forecast uplift)
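
To make that last takeaway concrete, here is a minimal sketch (assuming a hypothetical pandas table, illustrative column names, and a 24-hour freshness threshold) of what tracking data KPIs alongside business KPIs can look like:

```python
# A minimal sketch (not a reference implementation) of tracking data KPIs
# next to business KPIs. The table, column names, and the 24h freshness
# threshold are illustrative assumptions.
import pandas as pd

def data_kpis(df: pd.DataFrame, ts_col: str, freshness_hours: int = 24) -> dict:
    """Return basic data-quality KPIs for one domain dataset."""
    completeness = 1 - df.isna().mean().mean()          # share of non-null cells
    age = pd.Timestamp.now(tz="UTC") - pd.to_datetime(df[ts_col], utc=True)
    timeliness = (age <= pd.Timedelta(hours=freshness_hours)).mean()
    return {"completeness": round(float(completeness), 3),
            "timeliness": round(float(timeliness), 3)}

# Hypothetical domain table: one missing amount, one stale record.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "amount": [120.0, None, 80.0, 95.0],
    "updated_at": pd.Timestamp.now(tz="UTC") - pd.to_timedelta([2, 30, 5, 1], unit="h"),
})

print(data_kpis(orders, ts_col="updated_at"))
# Report these alongside the domain's business KPIs (e.g., forecast uplift, ROI).
```

Report the output next to the business KPIs each domain already owns, so the DOM conversation stays anchored in value rather than plumbing.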

Which model are you using? What’s working—and what’s not? Let’s elevate the conversation.

Why 90% of Companies Fail at Digital Transformation (And How Modular Architecture + AI Fixes It)

Here’s a hard truth: Most enterprise architectures are built like medieval castles—impressive, rigid, and completely useless when the world changes overnight.

The $900 Billion Problem No One Talks About

While executives throw billions at “digital transformation,” they’re missing the fundamental issue. It’s not about having the latest tech stack or hiring more developers.

It’s about architecture.

Think about it: You wouldn’t build a house without blueprints, yet companies are running multi-billion dollar operations on architectural chaos. The result? They can’t adapt fast enough when markets shift, competitors emerge, or customer needs evolve.

The Four Pillars That Make or Break Your Business

Every successful enterprise runs on four architectural foundations. Get one wrong, and your entire digital strategy crumbles:

1. Business Architecture: Your Money-Making Blueprint

This isn’t corporate fluff—it’s how you actually create value. Your business models, processes, capabilities, and strategies either work together like a Swiss watch, or they’re fighting each other like a dysfunctional family.

Red flag: If you can’t explain how your business creates value in one sentence, your architecture is broken.

2. Data Architecture: Your Digital Nervous System

Data is the new oil, but most companies are drilling with stone-age tools. Your data models, flows, and APIs should work seamlessly together, not require a PhD to understand.

Reality check: If finding the right data takes your team hours instead of seconds, you’re bleeding money.

3. Application Architecture: Your Digital Muscles

Your applications should be lean, mean, and modular. Instead, most companies have Frankenstein systems held together with digital duct tape.

Warning sign: If adding a simple feature requires touching 15 different systems, you’re in trouble.

4. Technology Architecture: Your Foundation

This is your infrastructure, networks, and security. It should be invisible when it works and obvious when it doesn’t.

The test: Can you scale up 10x without your systems catching fire? If not, you’re not ready for growth.

The Million-Dollar Dilemma Every CEO Faces

Here’s where it gets real: Every business faces the same impossible choice—perform today or transform for tomorrow.

  • Focus on core business = make money now, but risk becoming irrelevant
  • Focus on transformation = maybe make money later, but struggle today

Most companies choose wrong. They either become innovation-paralyzed cash cows or transformation-obsessed startups that never turn a profit.

The Game-Changing Solution: Modular Architecture

Smart companies have figured out the cheat code: modularity.

Instead of choosing between today and tomorrow, modular architecture lets you do both. Here’s why it’s pure genius:

  • Adapt in days, not years when markets shift
  • Scale individual components without rebuilding everything
  • Test new ideas without risking core operations
  • Pivot instantly when opportunities emerge

Real talk: Companies with modular architecture adapt 3x faster than their competitors. While others are still having meetings about change, modular companies are already capturing new markets.
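
To see the principle in miniature, here is a small, hypothetical Python sketch of modularity: the core flow depends only on an interface, so a module can be swapped, scaled, or replaced without touching anything else. The names are illustrative, not a reference architecture.

```python
# A small, hypothetical sketch of modularity: the core flow depends only on
# an interface, so a module can be swapped or scaled without touching the rest.
# Class and function names are illustrative, not a reference architecture.
from typing import Protocol


class PaymentGateway(Protocol):
    def charge(self, customer_id: str, amount: float) -> str: ...


class LegacyGateway:
    def charge(self, customer_id: str, amount: float) -> str:
        return f"legacy:{customer_id}:{amount}"


class NewProviderGateway:
    def charge(self, customer_id: str, amount: float) -> str:
        return f"new-provider:{customer_id}:{amount}"


def checkout(gateway: PaymentGateway, customer_id: str, amount: float) -> str:
    # The core business flow never imports a concrete provider.
    return gateway.charge(customer_id, amount)


# Swapping the module is a one-line change at the composition root.
print(checkout(LegacyGateway(), "c-42", 99.0))
print(checkout(NewProviderGateway(), "c-42", 99.0))
```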

Where AI Becomes Your Secret Weapon

Here’s where it gets exciting. AI isn’t just another tool—it’s the ultimate architecture amplifier. But only if you use it right.

At the Business Level: AI predicts market shifts, mines hidden process insights, and simulates business models before you risk real money.

At the Data Level: AI automatically cleans your data mess, detects anomalies you’d never catch, and creates synthetic data for testing without privacy nightmares.

At the Application Level: AI monitors your systems 24/7, generates code that actually works, creates self-healing applications, and automates testing that would take humans months.

At the Technology Level: AI manages your cloud infrastructure, fights cyber threats in real-time, and optimizes everything automatically.
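
As a hedged illustration of the data-level point above, here is a toy anomaly-detection sketch built on scikit-learn's IsolationForest and synthetic values; the features and contamination rate are assumptions you would tune to your own pipelines.

```python
# A toy, hedged illustration of the data-level point: an unsupervised
# anomaly detector over a synthetic numeric feed. Features, contamination
# rate, and thresholds are assumptions to tune for real pipelines.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=100, scale=5, size=(500, 1))   # typical daily values
outliers = np.array([[160.0], [20.0]])                 # injected anomalies
values = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.01, random_state=0).fit(values)
flags = detector.predict(values)                       # -1 = anomaly, 1 = normal
print("flagged row indices:", np.where(flags == -1)[0])
```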

The Bottom Line (And Why This Matters Right Now)

The companies winning today aren’t the ones with the biggest budgets—they’re the ones with the smartest architecture.

While your competitors are stuck in architectural quicksand, modular architecture + AI gives you superpowers:

  • React to market changes in real-time
  • Launch new products at lightning speed
  • Scale without breaking everything
  • Innovate without sacrificing stability

Your Next Move

The brutal reality: Every day you delay building modular architecture is another day your competitors get further ahead.

The companies that embrace this approach won’t just survive the next market disruption—they’ll be the ones causing it.

The question isn’t whether you should build modular architecture enhanced by AI.

The question is: Can you afford not to?


What’s your biggest architectural challenge right now? Share in the comments.

Beyond Compliance: How DORA Is Reshaping Financial Resilience into Competitive Advantage

Four months into full applicability, the Digital Operational Resilience Act (DORA) is proving more complex than anticipated. Financial institutions are navigating a fast-evolving regulatory landscape shaped by fragmented supervisory readiness, expanding technical requirements, and increasing market expectations.

Key takeaways:
* DORA is not a one-off checklist—it’s a multi-phase transformation touching governance, third-party risk, cyber resilience, and operational continuity.
* Mapping critical processes and ICT dependencies is now foundational (a minimal register sketch follows these takeaways).
* Third-party risk management must go beyond tick-box audits—dynamic oversight and contract readiness with cloud providers are essential.
* Operational resilience testing—including Threat-Led Penetration Testing (TLPT)—requires new levels of maturity and coordination.
* Compliance must shift from paper to practice—through automation, testing, and real-world response capabilities.
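
As a rough starting point for the dependency-mapping takeaway above, the sketch below models a minimal register linking a business function to its ICT assets and providers. The entries and field names are hypothetical; DORA's actual register-of-information templates go considerably further.

```python
# A rough, hypothetical sketch of an ICT dependency register: business
# functions linked to ICT assets and providers so third-party concentrations
# surface quickly. Real DORA registers of information go much further.
from dataclasses import dataclass, field
from typing import List


@dataclass
class IctDependency:
    asset: str
    provider: str          # internal team or third-party provider
    critical: bool = True


@dataclass
class BusinessFunction:
    name: str
    dependencies: List[IctDependency] = field(default_factory=list)


payments = BusinessFunction(
    name="Retail payments",
    dependencies=[
        IctDependency(asset="Core payments engine", provider="internal"),
        IctDependency(asset="Cloud hosting", provider="ExampleCloud Ltd"),
    ],
)

# Third-party dependencies flagged for contract and exit-plan review.
third_party = [d.asset for d in payments.dependencies if d.provider != "internal"]
print(third_party)
```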

Strategic priorities for 2025–2026:
* Focus on business-critical ICT dependencies
* Strengthen third-party risk management
* Engage proactively with regulators
* Operationalise continuous compliance

Institutions that embed resilience—not just demonstrate compliance—will gain long-term advantage.

AI’s Black Box Nightmare: How the EU AI Act Is Exposing the Dark Side of GenAI and LLM Architectures

With the EU AI Act entering into force, two of the most critical requirements for high-risk and general-purpose AI systems (GPAI) are Explainability and Fairness. But current GenAI and LLM architectures are fundamentally at odds with these goals.
A.- Explainability barriers:
* Opaque Architectures: LLMs like GPT or LLaMA operate as high-dimensional black boxes—tracing a specific output back to an input is non-trivial.
* Post-hoc Interpretability Limits: Tools like SHAP or LIME offer correlation, not causality—often falling short of legal standards (see the toy example after this list).
* Prompt Sensitivity: Minor prompt tweaks yield different outputs, destabilizing reproducibility.
* Emergent Behaviors: Unintended behaviors appear as models scale, making explanation and debugging unpredictable.
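
To make the post-hoc limitation tangible, here is a toy SHAP example on a small tabular regressor (deliberately not an LLM): it yields per-feature attributions for a prediction, which is correlational evidence about the model's behaviour, not a causal or legally robust explanation. The dataset and model are synthetic placeholders.

```python
# A toy SHAP example on a small tabular regressor (deliberately not an LLM):
# per-feature attributions for a prediction are correlational evidence about
# the model, not a causal, legally robust explanation. Data is synthetic.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=0.1, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
contrib = explainer.shap_values(X[:3])   # per-feature contributions, shape (3, 5)
print(np.round(contrib[0], 3))           # attributions for the first sample
```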
B.- Fairness barriers:
* Training Bias: Models absorb societal bias from uncurated internet-scale data, amplifying discrimination risks.
* Lack of Sensitive Attribute Data: Limits proper disparate impact analysis and subgroup auditing (a minimal check is sketched after this list).
* No Ground Truth for Fairness: Open-ended outputs make “fairness” hard to define, let alone measure.
* Bias Evolves: AI agents adapt post-deployment—biases can emerge over time, challenging longitudinal accountability.
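
For the subgroup-auditing point above, here is a minimal, synthetic sketch of a disparate impact check, i.e. the ratio of positive-outcome rates across groups; it exists mainly to show why the audit is impossible when the sensitive attribute is unavailable. Groups and rates are invented.

```python
# A minimal, synthetic sketch of a subgroup audit: the disparate impact ratio
# compares positive-outcome rates across groups (the "four-fifths" heuristic
# flags ratios below 0.8). Without the sensitive attribute, this cannot run.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000, p=[0.7, 0.3])   # sensitive attribute
outcome = np.where(group == "A",
                   rng.random(1000) < 0.60,               # group A positive rate
                   rng.random(1000) < 0.45)               # group B positive rate

rate_a = outcome[group == "A"].mean()
rate_b = outcome[group == "B"].mean()
print(f"rates A={rate_a:.2f} B={rate_b:.2f}  disparate impact={rate_b / rate_a:.2f}")
```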
C.- Cross-cutting dilemmas:
* Trade-offs exist between explainability and fairness—enhancing one can reduce the other.
* No standard benchmarks = fragmented compliance pathways.
* Stochastic outputs break reproducibility and traceability.
With key transparency requirements becoming mandatory starting in August 2025, we urgently need:
• New model designs with interpretability-by-default,
• Scalable bias mitigation techniques,
• Robust, standardized toolkits and benchmarks.
As we shift from research to regulation, engineering trustworthy AI isn’t just ethical—it’s mandatory.

Strategy to Capitalize on Generative AI in Business

Featured

The integration of Generative AI (GenAI) in businesses presents both challenges and opportunities. This article outlines strategies for deploying GenAI, ensuring compliance, managing risks, and facilitating monetization in a rapidly evolving technological environment.

A.- Understanding GenAI Challenges

Key obstacles to GenAI integration include:

  • Lack of incentives: Without apparent benefits, employees might resist new AI tools.
  • Ignorance of AI’s potential: Misunderstanding what AI can do often leads to its underuse.
  • Fear of job displacement: Concerns about AI replacing jobs or empowering junior employees can cause resistance.
  • Restrictive policies: Conservative approaches may stifle AI adoption, pushing employees to seek alternatives outside the organization.

B.- Strategic Integration of GenAI

  • Identify High-Value Applications: Target roles and processes where GenAI can boost efficiency, such as data analysis and customer service, ensuring immediate impact and wider acceptance.
  • Educate and Incentivize Employees: Develop training programs coupled with incentives to foster AI adoption and proficiency.
  • Risks and Contingency Planning: Assess and manage technological, regulatory, and organizational risks with proactive safeguards and strategic planning for potential issues.
  • Incremental Implementation: Start with pilot projects offering high returns, which can be expanded later, showcasing their effectiveness and ROI.

C.- Monetization Strategies

  • Enhance Productivity: Apply GenAI to automate routine tasks and enhance complex decision-making, freeing up resources for more strategic tasks, thereby reducing costs and improving output quality.
  • Develop New Products and Services: Utilize GenAI to create innovative products or enhance existing ones, opening up new revenue streams like AI-driven analytics services.
  • Improve Customer Engagement: Deploy GenAI tools like chatbots or personalized recommendation systems to boost customer interaction and satisfaction, potentially increasing retention and sales.
  • Optimize Resource Management: Use GenAI to predict demand trends, optimize supply chains, and manage resources efficiently, reducing waste and lowering operational costs.

D.- Conclusion

Successfully integrating and monetizing GenAI involves overcoming resistance, managing risks, and strategically deploying AI to boost productivity, drive innovation, and enhance customer engagement. By thoughtfully addressing these issues, companies can thrive in the era of rapid AI evolution.

The EU AI Act: An Overview

Featured

Set to take effect in stages starting summer 2024, the AI Act is poised to become the world’s first comprehensive AI law. It aims to govern the use and impact of AI technologies across the EU, affecting a broad range of stakeholders including AI providers, deployers, importers, and distributors.
🔹Key Provisions & Impact: The Act categorizes AI systems into prohibited, high-risk, and general-purpose models, each with specific compliance requirements. Notably, high-risk AI systems face stringent obligations, impacting sectors from employment to public services. The Act also introduces bans on certain AI practices deemed harmful, like emotion recognition in workplaces or untargeted image scraping for facial recognition.
🔹Compliance & Penalties: Compliance will vary by the nature of AI usage, with penalties for non-compliance reaching up to €35 million or 7% of annual worldwide turnover. The AI Act also incorporates and aligns with existing EU regulations like GDPR, requiring businesses to assess both new and existing legal frameworks.
🔹Timeline for Implementation: The AI Act will phase in its provisions, with most obligations impacting businesses after a two-year period post-enactment.
🔹Strategic Considerations: Entities involved in AI need to develop robust governance frameworks early to align with the Act’s requirements. As AI technologies and legal standards evolve, staying informed and adaptable is crucial.
🔹Global Perspective: Unlike the EU’s comprehensive approach, the UK is currently opting for a non-binding, principles-based framework for AI regulation. This divergence highlights varying international stances on AI governance.
For businesses and professionals involved in AI, the incoming AI Act represents both a challenge and an opportunity to lead in responsible AI deployment and innovation.

More on: https://bit.ly/4bN8gM0

Follow me on X: @MiguelChamochin

EU Sets Global Precedent with Comprehensive AI Act

Featured

The European Union has just reached a landmark agreement on a comprehensive AI law, poised to set a global precedent. This new regulation, known as the AI Act, is one of the first of its kind and aims to manage the rapidly evolving AI technology with a risk-based approach.

Key highlights of the AI Act include:

  • Risk-Based Regulation: The AI Act will categorize AI systems based on their level of risk, with the most stringent regulations applied to high-risk systems; widely used general-purpose models, such as those behind ChatGPT, fall under their own tier of obligations.
  • Enforcement Across EU: All 27 member states will be involved in enforcing the law, with certain aspects taking up to 24 months to become effective.
  • Global Impact: The legislation is expected to influence AI development worldwide, serving as a model for other countries.
  • Comprehensive Prohibitions: The AI Act will ban AI use for social scoring, manipulating human behavior, and exploiting vulnerable groups. Strict restrictions are also placed on facial recognition technology and AI systems in the workplace and educational institutions.
  • Significant Fines for Non-Compliance: Companies that fail to comply with these new rules could face fines of up to €35 million or 7% of global revenue.
  • Two-Tier Approach for AI Models: The Act establishes transparency requirements for general-purpose AI models and stronger requirements for those with systemic impacts.
  • Encouragement for Innovation: Despite strict regulations, the Act aims to avoid excessive burdens on companies, promoting a balance between safeguarding AI technology use and encouraging innovation.
  • Future Perspectives: Looking ahead, this legislation is a crucial step in shaping the global AI regulatory landscape, with implications for AI legislation and automated decision-making rules in other jurisdictions, including Canada, the United States, and beyond.

The EU AI Act is much more than just a set of rules; it’s a catalyst for EU startups and researchers to lead in the global AI race. With this act, the EU becomes the first major jurisdiction to establish clear rules for AI use, potentially guiding future global standards in AI regulation.

More on: https://bit.ly/486f0n3

Embracing the Future: How Businesses Can Navigate the Risks and Regulations of Generative AI

In an era where technological advancements are not just rapid but revolutionary, generative AI stands at the forefront, redefining the boundaries of what’s possible. This makes it important to understand and adapt to the risks and regulatory challenges posed by technologies like Generative AI.

Understanding the Landscape: Generative AI, with its ability to create content and automate processes, is a game-changer for businesses across various sectors. However, with great power comes great responsibility. It is important for business leaders to be well-versed in the potential risks associated with these technologies. From data privacy concerns to ethical implications, the landscape is complex and ever-evolving. As these AI models become more integrated into business operations, understanding their legal and ethical dimensions becomes paramount.

Navigating the risks associated with generative AI involves a multifaceted approach. Here are key strategies a company can adopt:

  • Stay Informed and Educate Teams: Continuously educate yourself and your team about the latest developments in generative AI. Understanding the capabilities and limitations of these technologies is crucial. Regular training and workshops can help employees stay abreast of new developments and understand the ethical and legal implications of AI.
  • Develop Robust Policies and Guidelines: Create clear policies and guidelines for using generative AI. These should cover areas like data privacy, ethical use of AI, and compliance with relevant laws and regulations. Ensure these policies are regularly updated to reflect the evolving nature of AI technology and regulatory landscapes.
  • Implement Strong Data Governance: Since generative AI often relies on large datasets, it’s vital to have strong data governance policies in place. This includes ensuring data privacy, securing data against breaches, and complying with data protection regulations like GDPR or CCPA.
  • Risk Assessment and Management: Conduct regular risk assessments to identify potential risks associated with the use of generative AI. This should include evaluating the impact of AI decisions and outputs on various stakeholders, including customers, employees, and the broader community.
  • Ethical AI Framework: Develop an ethical framework for AI use that aligns with your company’s values and ethical standards. This includes ensuring fairness, transparency, and accountability in AI systems.
  • Engage with Legal and Compliance Teams: Work closely with legal and compliance teams to understand the regulatory environment and ensure that your use of AI is compliant with all relevant laws and regulations.
  • Collaborate with External Experts: Collaborate with external experts, including AI ethicists, legal experts, and industry peers, to gain diverse perspectives and stay informed about best practices in AI usage.
  • Monitor AI Performance and Impact: Continuously monitor the performance of AI systems to ensure they are working as intended and not producing biased or unfair outcomes. Be prepared to modify or discontinue the use of AI systems that do not meet ethical or performance standards.
  • Transparency and Accountability: Be transparent with stakeholders about how AI is being used in your business. This includes being open about the capabilities of AI systems and any limitations or risks associated with their use.
  • Prepare for Future Regulations: Anticipate future changes in the regulatory landscape and be prepared to adapt your AI strategies accordingly. This proactive approach can help avoid compliance issues and maintain a competitive edge.

By implementing these strategies, companies can better navigate the risks associated with generative AI and leverage its benefits responsibly and ethically.

Conclusion: The message is clear: the time to act is now. Businesses cannot afford to be passive consumers of generative AI technology. Instead, they must be active participants in shaping its use within their operations. By developing informed policies and staying ahead of regulatory curves, businesses can harness the full potential of generative AI while mitigating its risks. This proactive approach is not just a safeguard but a strategic advantage in the rapidly evolving digital world. As we step into the future, embracing and shaping the landscape of generative AI becomes a key determinant of success for businesses worldwide.

EU Taxonomy: MEPs do not object to inclusion of gas and nuclear activities.

The EU has missed an opportunity to show global leadership on climate change with a robust, science-based taxonomy underpinning a credible pathway to net zero. This decision will undermine the EU’s climate neutrality target for 2050.

The taxonomy is a voluntary instrument to guide the financial sector toward investments that allow us to reach our climate goals, and in practice it has become the key driving force. We are talking about the future guide to what counts as sustainable.

Europe’s energy shortages have underscored the challenges of phasing out fossil fuels & nuclear power, and of relying on renewable supplies and power storage. Gas is seen as a way of helping to wean poorer EU countries like Poland off coal, which pollutes much more. France has touted nuclear as a low-carbon energy source crucial for the replacement of Russian fossil fuels. Excluding these energy sources from the taxonomy could be “particularly challenging” for Ukraine’s post-war reconstruction. Germany has expressed its rejection of the inclusion of nuclear energy, even as it remains dependent on gas. This decision could benefit Russia and perpetuate European reliance on its gas supplies.

It is completely clear that neither nuclear energy nor fossil gas has anything to do with sustainability. This decision denotes the supremacy of lobby groups and national energy policy over scientific rationale.

More on: https://bit.ly/3anhVyY

The European Central Bank takes further steps to incorporate climate change into its monetary policy operations.

The ECB will account for climate change in its corporate bond purchases, collateral framework, disclosure requirements and risk management, in line with its climate action plan.

The Eurosystem aims to gradually decarbonise its corporate bond holdings, on a path aligned with the goals of the Paris Agreement. It will limit the share of assets issued by entities with a high carbon footprint that can be pledged as collateral by individual counterparties when borrowing from the Eurosystem. It will only accept marketable assets and credit claims from companies and debtors that comply with the Corporate Sustainability Reporting Directive (CSRD) as collateral in Eurosystem credit operations. The Eurosystem will further enhance its risk assessment tools and capabilities to better include climate-related risks.

These measures aim to reduce financial risk related to climate change on the Eurosystem’s balance sheet, encourage transparency, and support the green transition of the economy.

Looking ahead, the Governing Council is committed to regularly reviewing these measures to ensure they remain fit for purpose and aligned with the objectives of the Paris Agreement and the EU’s climate neutrality objectives.

More on https://bit.ly/3P5jaRU