
01 Mar Prof Calzada’s article ‘Trustworthy AI for Whom? GenAI Detection Techniques of Trust Through Decentralized Web3 Ecosystems’ has been accepted for publication stemming from his Horizon Europe ENFIELD project
Professor Igor Calzada is pleased to announce that his latest research article, “Trustworthy AI for Whom? GenAI Detection Techniques of Trust Through Decentralized Web3 Ecosystems,” co-authored with BME scholars, has been accepted for publication in Big Data and Cognitive Computing (BDCC, ISSN 2504-2289; Impact Factor 3.1; CiteScore 7.1), an open-access journal by MDPI. The article, officially accepted on March 1, 2025, contributes to the ongoing debate on trust in artificial intelligence (AI), particularly in the context of Generative AI (GenAI) and Web3 ecosystems.
Here is the preprint (the accepted version in its current form), while the final version is being processed by the journal:
Calzada, I., Németh, G., & Al-Radhi, M. S. (2025). Trustworthy AI for Whom? GenAI Detection Techniques of Trust Through Decentralized Web3 Ecosystems. Big Data and Cognitive Computing. https://doi.org/10.20944/preprints202501.2018.v1
The Significance of This Research
As AI systems become more embedded in society, challenges related to trust, governance, and fairness have taken center stage. In this paper, Prof. Calzada and his co-authors examine how GenAI detection techniques can reinforce trust through decentralized Web3 infrastructures. By exploring the intersection of AI and blockchain technologies, the study proposes innovative approaches to addressing AI governance challenges.
Related Initiatives and Ongoing Discussions
This research aligns with several initiatives and policy discussions on AI governance:
- Prof. Calzada was selected as Senior Researcher for the project “Democracy in the Age of Algorithm: Enhancing Transparency and Trust in AI-Generated Content through Innovative Detection Techniques.”
- He led this collaboration with BME (Budapest University of Technology & Economics) from 1 October 2024 until 28 February 2025.
- During this time, he conducted action research on the EU AI Act, the Draghi Report, and Web3 techniques.
- During the General Assembly of the Horizon Europe ENFIELD project, a workshop was held on February 14, 2025, in Budapest, bringing together leading experts to discuss AI governance, trust, and policy-making. The agenda and call for papers remain available for reference: Workshop Agenda & Call for Papers.
- This ENFIELD project, proposed by Prof. Calzada, opened up new multidisciplinary pathways by investigating AI’s societal impact in Europe, particularly in governance and regulation, including policy challenges and chatbot comparison. More details on the “AI for Whom?” workshop can be found here: https://www.enfield-project.eu/aiforwhomws.
- The discussion on Generative AI and Urban AI policy challenges is ongoing. At the workshop, Prof. Calzada launched a Call for Papers and announced a Special Issue in the journal Transforming Government: People, Process and Policy, published by Emerald Publishing: https://www.emeraldgrouppublishing.com/calls-for-papers/generative-ai-and-urban-ai-policy-challenges-ahead-trustworthy-ai-whom. The Call for Papers is open until 31 July 2025.
- In addition, a book is in press with Edward Elgar Publishing, in which Prof. Calzada develops several research pathways at the crossroads of multidisciplinarity and roadmapping around GenAI. Here is the reference for the book, due out by August 2025:
- Visvizi, A., Kozlowski, K., Calzada, I., & Troisi, O. (2025). Multidisciplinary Movements in AI and Generative AI: Society, Business, Education. Cheltenham: Edward Elgar.
- In parallel, another Special Issue has been submitted to the journal Discover Cities. It will be announced in due course, in connection with another event stemming from ENFIELD’s impact and dissemination activities.
Future Directions
Prof. Calzada’s research aims to redefine trust in (Gen or Urban) AI by integrating decentralized and transparent mechanisms that promote accountability. He invites scholars, policymakers, and industry leaders to engage with these findings and contribute to the broader discourse on Trustworthy AI.