Beyond Compliance: How DORA Is Reshaping Financial Resilience into Competitive Advantage

Four months into full applicability, the Digital Operational Resilience Act (DORA) is proving more complex than anticipated. Financial institutions are navigating a fast-evolving regulatory landscape shaped by fragmented supervisory readiness, expanding technical requirements, and increasing market expectations.

Key takeaways:
* DORA is not a one-off checklist—it’s a multi-phase transformation touching governance, third-party risk, cyber resilience, and operational continuity.
* Mapping critical processes and ICT dependencies is now foundational.
* Third-party risk management must go beyond tick-box audits—dynamic oversight and contract readiness with cloud providers are essential.
* Operational resilience testing—including Threat-Led Penetration Testing (TLPT)—requires new levels of maturity and coordination.
* Compliance must shift from paper to practice—through automation, testing, and real-world response capabilities.

Strategic priorities for 2025–2026:
* Focus on business-critical ICT dependencies
* Strengthen third-party risk management
* Engage proactively with regulators
* Operationalise continuous compliance
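Mapping critical processes to ICT dependencies, the foundational step above, can start very simply. A minimal sketch (provider and function names are purely illustrative) of how a dependency register surfaces third-party concentration risk:

```python
from collections import defaultdict

# Hypothetical register mapping critical business functions to the
# ICT third-party providers they depend on (illustrative names only).
dependencies = [
    ("payments",         "CloudCo", "critical"),
    ("payments",         "PayGate", "critical"),
    ("client-reporting", "CloudCo", "important"),
    ("trading",          "CloudCo", "critical"),
]

def concentration_report(deps):
    """Count how many critical functions rely on each provider."""
    counts = defaultdict(int)
    for function, provider, criticality in deps:
        if criticality == "critical":
            counts[provider] += 1
    # Providers supporting more than one critical function are single
    # points of failure worth prioritising for oversight and exit planning.
    return {p: n for p, n in counts.items() if n > 1}

print(concentration_report(dependencies))  # {'CloudCo': 2}
```

Even a spreadsheet-level register like this makes the "dynamic oversight" conversation with cloud providers concrete: it shows which contracts matter most.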

Institutions that embed resilience—not just demonstrate compliance—will gain long-term advantage.

AI’s Black Box Nightmare: How the EU AI Act Is Exposing the Dark Side of GenAI and LLM Architectures

With the EU AI Act entering into force, two of the most critical requirements for high-risk and general-purpose AI (GPAI) systems are Explainability and Fairness. But current GenAI and LLM architectures are fundamentally at odds with these goals.
A.- Explainability Barriers:
* Opaque Architectures: LLMs like GPT or LLaMA operate as high-dimensional black boxes; tracing a specific output back to its inputs is non-trivial.
* Post-hoc Interpretability Limits: Tools like SHAP or LIME offer correlation, not causality, and often fall short of legal standards.
* Prompt Sensitivity: Minor prompt tweaks yield different outputs, destabilizing reproducibility.
* Emergent Behaviors: Unintended behaviors appear as models scale, making explanation and debugging unpredictable.
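The post-hoc limitation shows up even in a toy setting. A minimal perturbation-attribution sketch in the spirit of LIME (the model and tokens are invented for illustration):

```python
# Toy stand-in for an opaque model: it scores a token sequence using
# hidden rules, including an interaction ("not" penalises what follows).
def opaque_model(tokens):
    score = 0.0
    for i, t in enumerate(tokens):
        if t == "not" and i + 1 < len(tokens):
            score -= 1.5  # interaction term the attribution below will miss
        score += {"good": 1.0, "bad": -1.0}.get(t, 0.0)
    return score

def perturbation_attribution(tokens):
    """LIME-style sketch: drop each token and record the score change.
    The result correlates with the output; it is not a causal explanation."""
    base = opaque_model(tokens)
    return {t: base - opaque_model(tokens[:i] + tokens[i + 1:])
            for i, t in enumerate(tokens)}

print(perturbation_attribution(["not", "good"]))  # {'not': -1.5, 'good': -0.5}
```

Here "good" is assigned a negative attribution even though its direct contribution is +1.0, because dropping it also removes the negation's target. That correlation-versus-causation gap is exactly what legal explainability standards expose.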
B.- Fairness Barriers:
* Training Bias: Models absorb societal bias from uncurated internet-scale data, amplifying discrimination risks.
* Lack of Sensitive Attribute Data: The absence of protected-attribute labels limits proper disparate impact analysis and subgroup auditing.
* No Ground Truth for Fairness: Open-ended outputs make “fairness” hard to define, let alone measure.
* Bias Evolves: AI agents adapt post-deployment; biases can emerge over time, challenging longitudinal accountability.
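Where sensitive-attribute labels do exist, a subgroup audit is mechanically simple; the barrier is that they usually don't. A minimal sketch with invented decision data, using the selection-rate ratio behind the common "four-fifths" rule of thumb:

```python
# Minimal subgroup audit sketch with invented decisions (1 = favourable).
# In practice the sensitive attribute is often unavailable, which is the barrier.
outcomes = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def selection_rates(records):
    """Favourable-outcome rate per group."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest selection rate; values below 0.8
    trigger the common 'four-fifths' red flag."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates(outcomes)
print(rates)                         # {'group_a': 0.75, 'group_b': 0.25}
print(disparate_impact_ratio(rates))
```

Note that this only audits classification-style outputs; for open-ended generation there is no agreed analogue, which is the "no ground truth" problem above.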
C.- Cross-Cutting Dilemmas:
* Trade-offs exist between explainability and fairness: enhancing one can reduce the other.
* The lack of standard benchmarks means fragmented compliance pathways.
* Stochastic outputs break reproducibility and traceability.
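The reproducibility point is mechanical: sampling-based decoding is stochastic by design. A toy sketch (token names and scores are invented) contrasting temperature sampling with deterministic greedy decoding:

```python
import math
import random

# Toy next-token sampler: with temperature > 0 the same prompt can yield
# different outputs, while greedy decoding (temperature = 0) is deterministic.
VOCAB_SCORES = {"approve": 2.0, "deny": 1.9, "escalate": 0.5}

def sample_token(scores, temperature, rng):
    if temperature == 0:
        # Greedy decoding: always pick the highest-scoring token.
        return max(scores, key=scores.get)
    # Softmax-weighted sampling: reruns can and do differ.
    weights = [math.exp(s / temperature) for s in scores.values()]
    return rng.choices(list(scores), weights=weights)[0]

rng = random.Random()                       # unseeded: not reproducible
samples = {sample_token(VOCAB_SCORES, 1.0, rng) for _ in range(50)}
print(samples)                              # usually more than one token
print(sample_token(VOCAB_SCORES, 0, rng))   # always 'approve'
```

Pinning seeds and using greedy decoding helps traceability, but real deployments rarely accept the quality cost, so audit trails must instead log prompts, parameters, and outputs.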
With key transparency requirements becoming mandatory from August 2025, we urgently need:
* New model designs with interpretability by default,
* Scalable bias mitigation techniques,
* Robust, standardized toolkits and benchmarks.
As we shift from research to regulation, engineering trustworthy AI isn’t just ethical; it’s mandatory.

Strategy to Capitalize on Generative AI in Business

The integration of Generative AI (GenAI) in businesses presents both challenges and opportunities. This article outlines strategies for deploying GenAI, ensuring compliance, managing risks, and facilitating monetization in a rapidly evolving technological environment.

A.- Understanding GenAI Challenges

Key obstacles to GenAI integration include:

  • Lack of incentives: Without apparent benefits, employees might resist new AI tools.
  • Ignorance of AI’s potential: Misunderstanding what AI can do often leads to its underuse.
  • Fear of job displacement: Concerns about AI replacing jobs or empowering junior employees can cause resistance.
  • Restrictive policies: Conservative approaches may stifle AI adoption, pushing employees to seek alternatives outside the organization.

B.- Strategic Integration of GenAI

  • Identify High-Value Applications: Target roles and processes where GenAI can boost efficiency, such as data analysis and customer service, ensuring immediate impact and wider acceptance.
  • Educate and Incentivize Employees: Develop training programs coupled with incentives to foster AI adoption and proficiency.
  • Risks and Contingency Planning: Assess and manage technological, regulatory, and organizational risks with proactive safeguards and strategic planning for potential issues.
  • Incremental Implementation: Start with pilot projects offering high returns, which can be expanded later, showcasing their effectiveness and ROI.

C.- Monetization Strategies

  • Enhance Productivity: Apply GenAI to automate routine tasks and enhance complex decision-making, freeing up resources for more strategic tasks, thereby reducing costs and improving output quality.
  • Develop New Products and Services: Utilize GenAI to create innovative products or enhance existing ones, opening up new revenue streams like AI-driven analytics services.
  • Improve Customer Engagement: Deploy GenAI tools like chatbots or personalized recommendation systems to boost customer interaction and satisfaction, potentially increasing retention and sales.
  • Optimize Resource Management: Use GenAI to predict demand trends, optimize supply chains, and manage resources efficiently, reducing waste and lowering operational costs.

D.- Conclusion

Successfully integrating and monetizing GenAI involves overcoming resistance, managing risks, and strategically deploying AI to boost productivity, drive innovation, and enhance customer engagement. By thoughtfully addressing these issues, companies can thrive in the era of rapid AI evolution.

The EU AI Act: An Overview

Set to take effect in stages starting summer 2024, the AI Act is poised to become the world’s first comprehensive AI law. It aims to govern the use and impact of AI technologies across the EU, affecting a broad range of stakeholders including AI providers, deployers, importers, and distributors.
🔹Key Provisions & Impact: The Act categorizes AI systems into prohibited, high-risk, and general-purpose models, each with specific compliance requirements. Notably, high-risk AI systems face stringent obligations, impacting sectors from employment to public services. The Act also bans certain AI practices deemed harmful, such as emotion recognition in workplaces or untargeted image scraping for facial recognition.
🔹Compliance & Penalties: Compliance obligations will vary with the nature of AI usage, and penalties for non-compliance can reach up to €35 million or 7% of annual worldwide turnover, whichever is higher. The AI Act also incorporates and aligns with existing EU regulations such as the GDPR, requiring businesses to assess both new and existing legal frameworks.
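The penalty ceiling is simple arithmetic. A sketch, assuming (as is usual in EU fine regimes for undertakings) that the higher of the two amounts applies:

```python
def max_ai_act_fine(annual_worldwide_turnover_eur: float) -> float:
    """Upper bound for the most serious infringements: the higher of
    EUR 35 million or 7% of total worldwide annual turnover."""
    return max(35_000_000, 0.07 * annual_worldwide_turnover_eur)

print(max_ai_act_fine(200_000_000))    # 35000000: 7% is only 14M, floor applies
print(max_ai_act_fine(1_000_000_000))  # 70000000.0: 7% exceeds the floor
```

For large firms the turnover-based limb dominates, which is why the exposure scales with group revenue rather than with the size of the AI project itself.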
🔹Timeline for Implementation: The AI Act will phase in its provisions, with most obligations taking effect two years after the law’s entry into force.
🔹Strategic Considerations: Entities involved in AI need to develop robust governance frameworks early to align with the Act’s requirements. As AI technologies and legal standards evolve, staying informed and adaptable is crucial.
🔹Global Perspective: Unlike the EU’s comprehensive approach, the UK is currently opting for a non-binding, principles-based framework for AI regulation. This divergence highlights varying international stances on AI governance.
For businesses and professionals involved in AI, the incoming AI Act represents both a challenge and an opportunity to lead in responsible AI deployment and innovation.

More on: https://bit.ly/4bN8gM0

EU Sets Global Precedent with Comprehensive AI Act

The European Union has just reached a landmark agreement on a comprehensive AI law, poised to set a global precedent. This new regulation, known as the AI Act, is one of the first of its kind and aims to manage the rapidly evolving AI technology with a risk-based approach.

Key highlights of the AI Act include:

  • Risk-Based Regulation: The AI Act will categorize AI systems based on their level of risk, with the most stringent regulations applied to high-risk systems and dedicated rules for large general-purpose models such as those behind ChatGPT.
  • Enforcement Across EU: All 27 member states will be involved in enforcing the law, with certain aspects taking up to 24 months to become effective.
  • Global Impact: The legislation is expected to influence AI development worldwide, serving as a model for other countries.
  • Comprehensive Prohibitions: The AI Act will ban AI use for social scoring, manipulating human behavior, and exploiting vulnerable groups. Strict restrictions are also placed on facial recognition technology and AI systems in the workplace and educational institutions.
  • Significant Fines for Non-Compliance: Companies that fail to comply with these new rules could face fines of up to €35 million or 7% of global revenue.
  • Two-Tier Approach for AI Models: The Act establishes transparency requirements for general-purpose AI models and stronger requirements for those with systemic impacts.
  • Encouragement for Innovation: Despite strict regulations, the Act aims to avoid excessive burdens on companies, promoting a balance between safeguarding AI technology use and encouraging innovation.
  • Future Perspectives: Looking ahead, this legislation is a crucial step in shaping the global AI regulatory landscape, with implications for AI legislation and automated decision-making rules in other jurisdictions, including Canada, the United States, and beyond.

The EU AI Act is much more than just a set of rules; it’s a catalyst for EU startups and researchers to lead in the global AI race. With this act, the EU becomes the first major jurisdiction to establish clear rules for AI use, potentially guiding future global standards in AI regulation.

More on: https://bit.ly/486f0n3

The EU Artificial Intelligence Act (“AI Act”)

It establishes rules for the development, marketing, and use of AI-driven products, services, and systems. The first draft was published on 21 April 2021. The Act aims to include “measures in support of innovation”, such as AI regulatory sandboxes. Scientific research falls outside the parameters of the Act. General-purpose AI systems (image or speech recognition, audio or video generation, pattern detection, question answering, and translation) should not be considered within scope.

The Act takes a risk-based approach, categorising AI into three risk levels of activity: unacceptable (e.g. social scoring), high-risk (e.g. medical devices and consumer creditworthiness assessment), and low-risk.

The premise behind social scoring is that an AI system assigns a starting score to every individual, which increases or decreases depending on certain actions or behaviours. Such a score may not be relevant or fair depending on the variables in the model (e.g. gender could generate “financial exclusion and discrimination”). The Act draws a distinction between social scoring and “lawful evaluation practices of natural persons”, permitting the latter. Thus, processing an individual’s financial information to ascertain their eligibility for insurance policies may be permitted, although this deserves special consideration and is high risk.
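The mechanics can be illustrated in a few lines (all weights and events are invented): the same behaviour history produces a systematically lower score once an irrelevant protected attribute enters the model, which is precisely the "financial exclusion and discrimination" concern.

```python
# Illustrative only: a toy scoring model of the kind the Act targets,
# showing how an irrelevant variable (here, gender) skews the result.
START_SCORE = 500

def update_score(score, events, weights):
    """Adjust a starting score by the weight of each observed event."""
    for event in events:
        score += weights.get(event, 0)
    return score

biased_weights = {"paid_on_time": +20, "missed_payment": -40, "gender_f": -15}
fair_weights   = {"paid_on_time": +20, "missed_payment": -40}

events = ["paid_on_time", "missed_payment", "gender_f"]
print(update_score(START_SCORE, events, biased_weights))  # 465
print(update_score(START_SCORE, events, fair_weights))    # 480
```

The 15-point gap has nothing to do with behaviour; it is pure model design, which is why the Act treats the choice of variables, not just the outcomes, as the regulatory question.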

On 29 November 2021, a Compromise Text was published, providing further details of the obligations that providers of high-risk AI systems must adhere to. Its Annex III outlines eight areas considered high risk: biometric systems; critical infrastructure and protection of the environment; education and vocational training; employment, workers management and access to self-employment; access to and enjoyment of private services and public services and benefits; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes.

The draft Act currently includes an obligation that high-risk AI systems have data sets which are ‘free of errors’, but it has been questioned whether that is possible. As a result, the EU Committee on Industry, Research and Energy has recently proposed amending some of the standards to what it considers more realistic: “High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, assessment, validation and testing data sets considering the latest state-of-the-art measures, according to the specific market segment or scope of application… Unsupervised learning and reinforcement learning shall be developed on the basis of training data sets that meet the quality criteria referred to in paragraphs 2 to 5… Providers of high-risk AI systems that utilise data collected and/or managed by third parties may rely on representations from those third parties with regard to quality criteria referred to in paragraph 2, points (a), (b) and (c)… Training, validation and testing data sets are designed with the best possible efforts to ensure that they are relevant, representative, and appropriately vetted for errors in view of the intended purpose of the AI system. In particular, they shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used.”
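In engineering terms, such "best possible efforts" obligations reduce to ordinary data validation. A minimal sketch (field names and data are illustrative, not taken from the Act):

```python
# Sketch of dataset checks in the spirit of the amended wording:
# error vetting plus a basic representativeness statistic.
records = [
    {"age": 34, "income": 52_000, "label": 1},
    {"age": 29, "income": 48_000, "label": 0},
    {"age": None, "income": 61_000, "label": 1},  # error: missing value
]

def vet_for_errors(rows, required=("age", "income", "label")):
    """Return rows with missing required fields ('vetted for errors')."""
    return [r for r in rows if any(r.get(f) is None for f in required)]

def representation(rows, field, predicate):
    """Share of valid rows where predicate holds, ignoring erroneous rows.
    A crude stand-in for 'appropriate statistical properties' per subgroup."""
    valid = [r for r in rows if r.get(field) is not None]
    return sum(predicate(r[field]) for r in valid) / len(valid)

print(len(vet_for_errors(records)))                      # 1 flagged row
print(representation(records, "age", lambda a: a < 30))  # 0.5
```

The amendment's shift from "free of errors" to "best possible efforts" essentially acknowledges that checks like these find and bound errors; they cannot guarantee their absence.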

The Act envisages that providers of high-risk AI systems placed on the EU market will register those systems in the EU database referred to in Article 60.

Finally, questions remain as to when the Act will be adopted and start applying to organisations. For comparison, the GDPR was proposed in 2012 and only became fully applicable in 2018.

More on https://bit.ly/3a2T2Zm

The EU Digital Law

This June, the last pending initiative among those updating the rules that govern digital services in the EU was approved: the Digital Services Act (DSA) and the Digital Markets Act (DMA). Another important piece of legislation is the EU Artificial Intelligence Act (“AI Act”), which is still being amended and is expected to be approved soon, with changes intended to encourage innovation and refine the definition of AI.

The DSA and DMA have two main objectives: to create a safer digital space in which the fundamental rights of all users of digital services are protected, and to establish a level playing field that fosters innovation, growth, and competitiveness. The DSA sets out new liability rules for digital service providers, until now framed more laxly in Directive 2000/31/EC on electronic commerce.

The DMA establishes a series of narrowly defined objective criteria to classify certain online platforms as “gatekeepers”. Pending final approval by the European Parliament and the Council, the final text is expected to be adopted between September and October 2022. It is essentially the largest regulatory response the EU has proposed to date against the dominant positions of digital players such as the so-called GAFAM (Google, Apple, Facebook, Amazon, and Microsoft) in their respective subsectors.

More on: https://bit.ly/3OTtLQ4 https://bit.ly/2RYGUiH

Data is the fuel of digital transformation.

AI, Digital Twins, Open Finance, Smart Data, the Internet of Things, Industry 4.0, Web 3.0, and Metaverse mega-trends all depend on data.

A complex new regulatory framework for data is emerging. The EU General Data Protection Regulation (GDPR) imposes obligations on organizations anywhere in the world that target or collect data related to people in the EU. Furthermore, new data-focused EU laws such as the Digital Markets Act (DMA), the Digital Services Act (DSA), the Cyber Resilience Act (CRA), the Data Governance Act (DGA), and the Artificial Intelligence Act (AI Act) are set to come on-stream in the next few years.

Data science challenges existing business models by supporting the creation of disruptive data-driven ones. Implementing such new business models requires the development of ethical, sustainable, and practical legal frameworks for using data, and the adoption of such regulatory and legal action to drive business advantage. However, the appropriate balance between freedom and supervision is expected to remain a controversial issue throughout the legislative process. The policies emerging from Europe’s digital agenda for 2022 will have important consequences for Europe’s place in the world and its international partnerships.