“On artificial intelligence, trust is a must, not a nice-to-have,” said Margrethe Vestager, the European Commission’s executive vice-president, who oversees digital policy. The announcement, made on Wednesday 21 April, concerns the official proposal for new regulations on the use of Artificial Intelligence (AI) in government, the judiciary, and industry, intended to ensure accountability and protect individual citizens. It follows a series of earlier initiatives by the European Union, unique in the world, to define a legal and ethical framework for the use of data: a strategy that already includes the now-famous General Data Protection Regulation (GDPR) of 2016 and the 2019 guidelines for the ethical use of AI.
The relationship between data and privacy is a pervasive topic today, catalysed by the continuous technological growth of Industry 4.0 and by the concentration of IT and economic capital in the form of data (“datafication”) in the hands of Big Tech companies, the best-known example of which is perhaps FAANG (Facebook, Amazon, Apple, Netflix, Google). The risks lie above all in the acquisition, use, and sale of data without public approval or oversight. There is no shortage of scandals, or of examples of socio-economic discrimination driven by the predictive profiling of AI models: from the unauthorised sale of private social-media data, to predictive policing, to the provision of financial services and automated recruitment. The implications of this opaque concentration of AI’s potential emerge even more strongly in the far more drastic and controversial social credit programme being implemented by the government of the People’s Republic of China.
This reflects an appetite for large amounts of data (‘Big Data’) and, conversely, the difficulty of ensuring their quality, both in the annotation of information and in validating that sources and samples are representative and unbiased. AI algorithms, like the data they learn from, present similar difficulties: the result, often unwittingly, is mistuned hyper-parameters and the amplification of harmful data features inside models that are hard to inspect, such as Deep Neural Networks (so-called ‘black boxes’). To ensure transparency, research on the explainability of AI models (“Explainable AI”, or XAI) has become a major focus of papers and frameworks. The proposed models target tabular, textual, image, or hybrid data; methodologies may centre on the direct explanation of simple statistical formulae or, more often, on interpreting a model’s internal behaviour through post-hoc explanations. The latter are in turn subdivided into model-agnostic techniques, which treat the model as a replaceable module, and model-specific ones, which depend on its internals. A further distinction of granularity applies: global explanations describe the behaviour of the model as a whole, while local explanations account for individual predictions. Generalising, XAI models are additional layers on top of AI models that trace and quantify the importance of parameters through imitation, perturbation, and/or classification techniques, representing how those parameters contributed to a given output and providing simple and/or causal explanations in graphical and/or textual form.
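As a concrete illustration of the perturbation-based, model-agnostic family mentioned above, the sketch below shows permutation importance in plain Python: the model is treated as an opaque callable we can only query, and a feature's importance is measured by how much the error grows when that feature is shuffled. All names and the toy model are hypothetical, chosen only for the example.

```python
import random

# Hypothetical black-box model: we may only call it, never inspect it.
# Here it secretly depends almost entirely on feature 0.
def predict(x):
    return 3.0 * x[0] + 0.1 * x[1]

def mse(model, X, y):
    """Mean squared error of the model over a dataset."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, seed=0):
    """Model-agnostic, post-hoc explanation: shuffle one feature column
    and report how much the error increases over the baseline."""
    rng = random.Random(seed)
    base = mse(model, X, y)
    col = [x[feature] for x in X]
    rng.shuffle(col)                      # break the feature/target link
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(model, X_perm, y) - base

rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [predict(x) for x in X]

imp0 = permutation_importance(predict, X, y, 0)
imp1 = permutation_importance(predict, X, y, 1)
print(imp0 > imp1)  # True: feature 0 dominates the hidden rule
```

Because the procedure only calls `predict`, it works unchanged for any model, which is exactly what makes it model-agnostic; computing it on the whole dataset gives a global explanation, while restricting it to one sample approximates a local one.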
It is also easy to see how XAI techniques can favour the interpretability of AI models in their practical applications, placing the user at the centre of their design (“human-centred AI”). Programmers gain more information for troubleshooting and code maintenance; managers responsible for a data pipeline can quantify the impact of specific variables; legislators can assess the integrity of data and algorithms. Ordinary citizens benefit from the right, under the GDPR, to an explanation of the outcome of any request processed automatically by software whenever they are profiled. Explainability thus aims to reduce the potentially harmful implications of opaque data and AI models, prioritising the principles of fairness, accountability, and transparency. However, XAI says nothing about the environment around the data and the model, i.e. the ecosystem that defines their properties and uses.
It is no coincidence that, to truly advance this democratisation of data, growing interest is now forming around ‘decentralised AI’ solutions: databases and AI infrastructures that are redistributable, scalable, and easily accessible, and that guarantee the traceability and robustness of data and algorithms against tampering (‘data forgery’) and adversarial attacks. Among these solutions, blockchain architectures are ideal, since recorded data cannot be tampered with and retains its integrity. The consolidation of AI and blockchain can create a secure, immutable, and decentralised system for the highly sensitive information that AI-driven systems need to collect, store, and use. Examples can be seen in new decentralised data marketplaces such as SingularityNET, IOTA, and Ocean Protocol, which answer the demand for the transparent use of big data and AI models. On these marketplaces, creators of data assets can be rewarded for their contributions without binding themselves to private third parties. The data supply chain becomes traceable, offering both raw and clean data products, which reduces the costs of data preparation (‘data cleaning’). It also makes available collective information that would otherwise have remained trapped within individuals and/or companies had it been analysed in isolation (‘decentralised federated learning’). In particular, comprehensive cybersecurity solutions build on the redistributable and scalable nature of computational power, through protocols shared by every AI product in the network (‘blockchain-based cloud computing’).
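The tamper-evidence property that makes blockchains attractive here comes from chaining cryptographic hashes: each record commits to the hash of its predecessor, so altering any entry invalidates every hash after it. A minimal sketch of that idea (a toy chain, not any real blockchain's format):

```python
import hashlib
import json

def block_hash(record, prev_hash):
    """SHA-256 digest of a record together with the previous block's hash."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Link records into a hash chain, starting from a zero genesis hash."""
    chain, prev = [], "0" * 64
    for r in records:
        h = block_hash(r, prev)
        chain.append({"record": r, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or block_hash(blk["record"], prev) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

chain = build_chain(["dataset-v1", "dataset-v2"])
print(verify(chain))           # True
chain[0]["record"] = "forged"  # tamper with the first entry...
print(verify(chain))           # False: the stored hashes no longer match
```

Real networks add consensus protocols and distributed replication on top, so a forger would have to rewrite the chain on a majority of nodes, which is what gives the stored data its practical immutability.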
To summarise, combining blockchain and AI in a decentralised system would bring greater security in data sharing and traceability, greater confidence and transparency in AI operations and applications, and greater computational efficiency and collective decision-making through decentralised infrastructures and consensus protocols. The benefits could also flow the other way, with AI technologies optimising and automating blockchain systems for security and performance. AI recommendation models could run on each local node to provide customised user experiences while preserving privacy and the GDPR Right to be Forgotten. On the security side, AI swarm-intelligence models could optimise intrusion detection and prevention systems (IDS and IPS), while soft-computing models could optimise cryptographic hash functions. On the performance side, federated learning algorithms could reduce some scalability problems related to transaction-confirmation costs by distributing mining data over multiple blocks. Moreover, an AI-supported blockchain system could contribute to energy planning by optimising how resources are allocated and maximised. In the near future, new and increasingly complex decentralised architectures combining blockchain systems and AI technologies will be able to exploit these advantages, opening unprecedented prospects for their security and performance.
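The federated learning idea invoked above, and in the earlier notion of ‘decentralised federated learning’, rests on one mechanism: nodes share only model parameters, never raw records, and a coordinator combines them by weighted averaging. A deliberately tiny sketch, with a mean estimator standing in for a real model (all data and names invented for illustration):

```python
# Each node fits a local model on its private shard; only the fitted
# parameters (here: a mean and a sample count) ever leave the node.
def local_update(private_data):
    return sum(private_data) / len(private_data), len(private_data)

def federated_average(updates):
    """Combine (parameter, count) pairs into one global parameter,
    weighting each node by how much data it contributed."""
    total = sum(n for _, n in updates)
    return sum(m * n for m, n in updates) / total

node_a = [1.0, 2.0, 3.0]  # stays on node A
node_b = [10.0, 20.0]     # stays on node B
global_model = federated_average([local_update(node_a), local_update(node_b)])
print(global_model)  # 7.2, identical to the mean over all pooled records
```

For a mean estimator the federated result exactly equals training on the pooled data; for neural networks the same count-weighted averaging of parameters is performed over many rounds, trading exactness for the privacy of keeping every record on its own node.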
There are already studies proposing conceptual frameworks for decentralised applications (‘DApps’) in which XAI fosters traceability of the data chain through the IPFS protocol (‘InterPlanetary File System’). On the back end, through access-layer APIs (Web3, REST HTTP, JSON-RPC, JMS, SOAP), integrated oracle services would supply metadata from AI predictor nodes and explanation metadata from XAI nodes. The outcome would be recorded as a hash on IPFS, flanked by support services in which smart contracts can register, identify, execute, and track the accuracy of the predictors in line with the other nodes, reporting the result on the DApp front end. Real applications proposed so far include the financial profiling of bank customers, the prevention of tax evasion and electoral fraud, and diagnosis based on medical images. These are scenarios where, today more than ever, frameworks are required so that decisions made by AI models can be impartial and interpretable in a decentralised, tamper-proof way. To achieve this, they must be part of an incontestable system, with traceability and immutable records, accessible to all interested parties.
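The “recorded as a hash on IPFS” step boils down to content addressing: a prediction and its explanation are serialised deterministically and digested, and only that digest needs to live on-chain. A sketch of the principle, with hypothetical field names and SHA-256 in place of a real IPFS CID (which uses multihash encoding on top of the same idea):

```python
import hashlib
import json

def content_address(obj):
    """Deterministic digest of a record -- the principle behind
    IPFS-style content addressing (a real CID adds multihash encoding)."""
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical prediction-plus-explanation record from a predictor node
# and an XAI node, as the frameworks described above would pair them.
record = {
    "input_id": "case-0042",
    "prediction": "loan_approved",
    "explanation": {"income": 0.61, "debt_ratio": -0.27},  # feature weights
}
digest = content_address(record)

# A smart contract would store only `digest`; anyone holding the full
# record can recompute it and prove the prediction was never altered.
print(content_address(record) == digest)  # True
record["prediction"] = "loan_denied"
print(content_address(record) == digest)  # False: tampering is evident
```

Storing the digest rather than the record keeps sensitive data off-chain while still making both the AI decision and its explanation verifiable by every interested party.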
The article Blockchain and Explainable AI for data trustworthiness originally appeared on Affidaty Blog.