Last week Italy became the first country in the EU to pass comprehensive legislation regulating the use of AI. The law is far-reaching, limiting access by age and imposing potential prison terms for harmful use. Prime Minister Giorgia Meloni said the aim was to promote “transparent and safe AI use.”
Some have jumped at the opportunity to criticise the legislation, and especially the €1 billion of funding Meloni’s government has committed to AI development alongside it. Compared with other state and even private investments, some totalling hundreds of billions of dollars, they argue, Italy’s commitment is insignificant.
But to focus on the size of Italy’s monetary investment is to miss the bigger picture. This legislation, likely the first of many such national laws across Europe, is a defining moment for AI on the continent. It marks the first step towards a more transparent, equitable and innovative future for the technology worldwide.
The legislation’s provisions
One of the main provisions of the legislation is its protection of consumers. Under the new laws, children under the age of 14 won’t be able to access services that use AI without parental consent. Penalties for misuse, such as the production of harmful deepfakes or fraud, could be as severe as a five-year prison sentence.
Addressing the development and use of AI in the corporate sector, the legislation requires that all AI systems remain traceable and subject to human oversight. This applies across every sector that deploys AI, making it essential for organisations to track and understand what their programmes are doing. Human operators must retain ultimate control over all automated processes.
This means different things in different sectors. In finance, for example, banks and lenders are testing AI’s ability to detect fraud, assess creditworthiness and guide investment strategies – but they will remain responsible for all final decisions, and must clearly inform clients when AI has influenced an outcome. In recruitment, employers will need to explain how and when AI influences their decisions to hire or decline applicants. Across all domains, transparency and accountability will be paramount.
What this means for AI development, adoption, and use
This legislation sends a clear message: in Italy, reckless, unchecked development that does not protect citizens will no longer be tolerated. Its funding shows that innovation will be supported, but only when it is pursued responsibly. It’s in line with the EU’s existing AI Act, and we can expect other states across the continent to pass similar legislation in the coming years.
For businesses and investors, it’s a stark sign of what’s to come. While speed of development has won out so far, we’re entering a new era in which reliability, safety and control will define the companies that use AI to their advantage. Investing in AI that stakeholders can trust is becoming just as important as investing at scale – a particular lesson for those involved in much larger deals, like the £150 billion of investment US companies recently committed to the UK.
The models that have received the most attention so far unfortunately fall short in this respect. Many predicted that by 2025 companies across the spectrum would be using LLMs to propel their businesses forward, but BCG research shows that adoption by frontline employees has stalled at 51%, and leaders are struggling to justify the ROI. This is especially true in regulated sectors like law and finance, where accuracy and explainability are paramount.
One of the biggest barriers to their adoption is that they are opaque by nature: they generate predictions and decisions from vast amounts of data but provide little insight into how they reached them. The black-box character of their decision making is in fact an essential feature of the models themselves; they are prediction machines rather than truth seekers. In other words, businesses need another kind of AI if they are to optimise their processes at scale and in line with new legislation like Italy’s.
A solution: neurosymbolic models
The solution to this problem lies in models that prioritise explainability from the ground up. Neurosymbolic AI, an approach that combines deep learning with rule-based symbolic reasoning, offers one way of doing this. Unlike traditional LLMs, neurosymbolic AI structures its understanding in a deterministic and interpretable way, transforming data into symbolic representations that capture its meaning in structured, machine-readable form.
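To make the idea concrete, here is a minimal sketch in Python. Everything in it – the fact format, the rules, the thresholds – is an illustrative assumption rather than any vendor’s actual representation: a neural component (stubbed out here) extracts symbolic facts from text, and a symbolic component applies explicit rules whose outcome can be read directly.

    # A minimal neurosymbolic sketch. extract_facts() stands in for the
    # neural step; in a real system a trained model would map unstructured
    # text to symbolic facts. The rules and thresholds are hypothetical.

    def extract_facts(text: str) -> set[tuple]:
        """Stub for the neural step: facts as (subject, attribute, value)."""
        return {
            ("applicant", "annual_income", 38_000),
            ("applicant", "missed_payments", 0),
        }

    def get(facts: set[tuple], subject: str, attribute: str):
        return next(v for s, a, v in facts if s == subject and a == attribute)

    # The symbolic step: explicit, human-readable rules. Each returns
    # (passed, reason), so every decision carries its own justification.
    RULES = [
        lambda f: (get(f, "applicant", "annual_income") >= 25_000,
                   "income meets the 25,000 threshold"),
        lambda f: (get(f, "applicant", "missed_payments") == 0,
                   "no missed payments on record"),
    ]

    facts = extract_facts("...application text...")
    results = [rule(facts) for rule in RULES]
    print("approved" if all(p for p, _ in results) else "declined")
    for passed, reason in results:
        print("PASS" if passed else "FAIL", "-", reason)

The point of the structure is that the decision logic is data the system can inspect, not weights it can only sample from.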
The neurosymbolic AI that we’ve developed here at UnlikelyAI, for example, can take a document as input, analyse it, and translate it into our proprietary symbolic representation. The model then represents the document’s meaning explicitly, instead of inferring it from statistical patterns alone. An agent validates that logic against a set of ‘training scenarios’ designed to ensure accuracy, so the system can deliver precise, explainable decisions across millions of cases, whether insurance policies, loan applications or financial checks.
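In spirit, that validation step resembles the following sketch. The scenario format and the decision function are invented for illustration and are not UnlikelyAI’s API: the derived logic is replayed against cases with known correct outcomes, and only accepted if every case agrees.

    # Hypothetical sketch of scenario-based validation: replay the derived
    # symbolic logic against cases with known outcomes before trusting it.

    def derived_decision(application: dict) -> bool:
        # Stand-in for logic extracted from a policy document.
        return (application["annual_income"] >= 25_000
                and application["missed_payments"] == 0)

    # 'Training scenarios': inputs paired with the outcome an expert expects.
    SCENARIOS = [
        ({"annual_income": 30_000, "missed_payments": 0}, True),
        ({"annual_income": 30_000, "missed_payments": 2}, False),
        ({"annual_income": 10_000, "missed_payments": 0}, False),
    ]

    failures = [(case, expected) for case, expected in SCENARIOS
                if derived_decision(case) != expected]

    if failures:
        for case, expected in failures:
            print(f"mismatch on {case}: expected {expected}")
    else:
        print("all scenarios passed; logic accepted")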
This ability to generate structured, traceable explanations will be critical as new AI laws continue to emerge around the world. If an AI system declines a job application, for example, it will need to pinpoint the exact reasons for its decision. Neurosymbolic AI makes this possible by maintaining a logical, reproducible reasoning path, bridging the gap between technical complexity and human understanding.
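Concretely, such a reasoning path can travel with the decision itself. The criteria below are hypothetical, drawn from no real regulation or employer:

    # Illustrative only: a decision object that records why each check
    # passed or failed, giving a reproducible reasoning path.

    from dataclasses import dataclass, field

    @dataclass
    class Decision:
        outcome: str
        trace: list = field(default_factory=list)

    def assess(candidate: dict) -> Decision:
        decision = Decision(outcome="progress to interview")
        checks = [
            ("holds the required certification", candidate["certified"]),
            ("has at least 3 years' experience",
             candidate["years_experience"] >= 3),
        ]
        for description, passed in checks:
            decision.trace.append(("PASS" if passed else "FAIL")
                                  + ": " + description)
            if not passed:
                decision.outcome = "decline"
        return decision

    result = assess({"certified": True, "years_experience": 2})
    print(result.outcome)          # decline
    for step in result.trace:      # the exact reasons, reproducible on demand
        print(step)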
An end to AI’s ‘Wild West’ era in Europe
Italy’s new legislation matters far more than the size of its accompanying investment or quibbles over the finer points of its wording. In essence, it heralds the beginning of the end of AI’s ‘Wild West’ era in Europe. The effects will likely ripple outwards to other parts of the world, too.
Organisations that want to benefit from AI will now be held to much higher standards, and it will quickly become evident that many existing approaches are insufficient. Instead, they’ll need to adopt models that prioritise safety, transparency and explainability. Neurosymbolic AI is one of them, and through it, all stakeholders will be able to access the benefits of AI without compromising on innovation or performance.


