


AI and Us: Exploring public trust in AI
The IANUS consortium runs case studies and events that explore public trust in science and in specific scientific fields. As a consortium partner, the team of the Aristotle University of Thessaloniki (Data and Web Science Lab–Datalab) will explore public attitudes and trust in Artificial Intelligence (AI), focusing on Generative AI. The relevant event is co-organised by Datalab and the Citizen Science Hub of Aristotle University of Thessaloniki.
Study Title
AI and Us: Exploring public trust in AI
Principal Investigator
Prof. Athena Vakali, School of Informatics, Aristotle University of Thessaloniki
Funding Organization
HORIZON Research and Innovation Actions – Grant Agreement “INspiring and ANchoring TrUst in Science” - IANUS 101058158
Data Controller
Aristotle University of Thessaloniki (AUTh)
Data Protection Officer (DPO)
Research/Scientific Coordinators
Aristotle University of Thessaloniki (AUTH), Greece
- Athena Vakali, Professor, avakali@csd.auth.gr
- Sofia Yfantidou, Researcher and PhD Candidate, syfantid@csd.auth.gr
- Maria Michali, Researcher and PhD Candidate, mmichals@csd.auth.gr
- Eva Paraschou, Researcher and MSc Student, eparascho@csd.auth.gr
- Stefanos Raphael Kalogeros, Undergraduate Student, stefkalo@csd.auth.gr
Below, we provide you with some necessary information about the case study we are carrying out as part of the European project "INspiring and ANchoring TrUst in Science (IANUS)" (Contract No. 101058158), which is funded by the European Commission (Research Executive Agency), and we invite you to participate. Your participation is voluntary.
You can discuss this study and the consent form with other people, such as family, friends, or anyone you feel comfortable with. You do not have to decide right away. You can decide whether you want to participate in the study after you have thought about it. There may be words you need help understanding or things you would like more details about. You can stop anytime and ask the research coordinators questions.
To investigate whether our research hypothesis is correct, however obvious it may seem, we need to test it following the scientific method. We hypothesize that various factors influence the public's trust in Generative Artificial Intelligence (AI), such as demographics, digital literacy, scope of application, and whether humans are involved. This research tests this hypothesis through a case study exploring the public's trust in AI for political discourse analysis, given the limitations of these technologies with respect to "hallucinations" and the spread of misinformation.
After the end of our study, the researchers will pseudonymize your data so that it can no longer be directly linked to you (any identifying information will be replaced by a random identifier). The research team will then analyze the pseudonymized data to draw conclusions and provide guidelines for developing regulatory frameworks that protect the public from potential risks and ensure the responsible use of AI.
Research participants must be over 18 years old, live in the EU, and be literate.
You do not have to participate in the study if you don’t want to. Even if you say “yes” now, you can change your mind later and withdraw from the study anytime.
Participation is free of charge.
If you choose to participate in the study:
- At the beginning of the case study, you will be asked to complete questionnaires about your demographics, digital literacy, political beliefs and attitudes, and trust in institutions and organizations.
- You will be asked to use an online platform specifically designed for this case study, where you will rate a small number of tasks, using specially designed trust-measurement instruments, according to the degree of trust they inspire in you.
- You may be asked to participate in an open-ended interview in order to explore in more detail the factors that influence your trust in Generative AI.
The following data will be collected:
- Data regarding the "performance" of participants during the tasks
a. Trust Measurement Questionnaire for each task (four in total)
- Data from surveys
a. Responses related to demographics
b. Responses related to views and attitudes toward AI
c. Responses related to political views and attitudes
d. Responses related to trust in research and funding bodies/institutions
- Optional data from interviews
a. Transcripts of audio recordings on trust in AI and participation in tasks
Your personal data will be pseudonymized, meaning that your real name will be replaced by a random number so that the data cannot be directly linked to you. Only the researchers of the study, as listed above, will have access to your pseudonymized data.
Your personal data will not be transferred to a third country or to an international organization that does not comply with the GDPR.
Participation in this study is safe and completely harmless; it does not affect your physical or mental integrity.
If you register for the study, you will have the opportunity to learn about the limitations of Generative AI and its potential impact on politics. You can also act on this knowledge by contributing to the production of guidelines for regulatory frameworks that protect the public from potential risks and ensure the responsible use of Generative AI. At the same time, your participation in the study can give you the satisfaction of contributing to the advancement of scientific research.
When the research is finished, we will explain everything we have learned to you; an informational brochure will be available upon request. Later, we will inform others about our research and findings by writing articles and meeting with people interested in our work.
Your participation is voluntary. You can withdraw from the research at any time if you wish.
Consent is provided until the end (including evaluation) of the European project (31 December 2025), or until it is revoked, either by sending an e-mail to avakali@csd.auth.gr or by sending the form enclosed at the end of this document to the address of the research/scientific coordinator. Withdrawing consent at any time does not affect the lawfulness of processing carried out on the basis of consent before its withdrawal.
The processing of your personal data is based on your informed consent to processing for this specific purpose. Your personal data will be pseudonymized and stored on a computer located in AUTH's facilities. To ensure data security, access to this computer will be restricted to the researchers of the study via credentials, and all necessary measures will be taken to strengthen the system's security.
You have the right to request from the Principal Investigator access to, rectification, or erasure of your personal data, restriction of processing concerning your data, or to object to processing, as well as the right to data portability. For any inquiry or guidance regarding your rights, you can email avakali@csd.auth.gr or call +30-231099.8415. Any change in your personal data will occur within 30 days of your communication with the Principal Investigator.
If you have any questions about your personal data and relevant rights or believe your rights are being violated, you can contact the Data Protection Officer of the Aristotle University of Thessaloniki (dataprotection@auth.gr). For additional protection, you have the right to lodge a complaint with the Hellenic Data Protection Authority (www.dpa.gr).