If you are interested in one of the topics listed below, please send a detailed application with your concrete topic idea, your CV, and your Transcript of Records to the respective supervisor.
As a rule, all theses at the chair are to be written in English.
For students NOT enrolled in WiWi/WINF: Please clarify in advance with your degree program coordination whether supervision by our chair is possible. For students from other faculties (TechFak, NatFak), for example, this is often not possible.
Applications open until 01.04.2026!
Supervisor: Leonie Manzke
Level: Master
Timeframe: Start possible from April 2026; Submission possible by October 2026 at the earliest
When consumers use AI search summaries to inform purchase decisions, they are often presented with compact, polished guidance that highlights certain claims while deemphasizing others. As a result, users may rely on the summary’s most salient information rather than independently verifying the underlying sources. This makes AI summary contexts a strong candidate for studying trust calibration: whether people appropriately accept high-quality information while rejecting dubious or misleading claims.
Research Gap
While trust in AI and credibility assessment have been studied across many contexts, there is limited experimental evidence on how specific interface elements in AI search summaries shape (a) credibility assessments of claims within such a summary, (b) detection of dubious content, and (c) alignment between the information provided and subsequent opinion formation and behavior. In particular, interface cues that suggest breadth or consensus (for example, a “source count” label) or early attention-demanding cues (for example, an entry-sentence verdict) may shift which claims dominate advice decisions, potentially increasing overreliance when dubious claims are present.
Thesis Goals
This Master’s thesis aims to test how interface elements in a realistic, online AI Overview-style summary influence trust calibration and decision-making in a consumer advice task (i.e., advising a fictional neighbor on what to do).
The experiment will be implemented in collaboration with researchers from the Nürnberg Institute for Market Decisions (NIM), who will co-supervise the thesis.
Requirements
- Strong understanding of experimental design, measurement, and statistics.
- It is highly recommended that you have taken one of our courses, “Experimentelle Verhaltensforschung in Data Science” or “IS for Behavior Change”.
- Basic programming skills for stimulus implementation and click logging are not essential, but beneficial.
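To give a sense of what “click logging” in a browser-based experiment might involve, the sketch below shows a minimal in-memory click logger; the element IDs and the logger interface are purely illustrative assumptions, not part of the planned study design.

```javascript
// Minimal sketch of a click logger for a browser-based stimulus.
// Clicks are collected in memory and could later be attached to
// the participant's survey response or sent to a server.
function createClickLogger() {
  const events = [];
  return {
    // Record which element was clicked and when.
    log(elementId, timestamp = Date.now()) {
      events.push({ elementId, timestamp });
    },
    // Export the collected log, e.g. as a hidden survey field.
    export() {
      return JSON.stringify(events);
    },
  };
}

// In the browser one would wire it to the stimulus, for example:
// const logger = createClickLogger();
// document.getElementById('source-count')
//   .addEventListener('click', () => logger.log('source-count'));
```

A plain array of timestamped events keeps the logger framework-free, so it can be dropped into whatever survey or experiment platform is eventually chosen.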
Applications open until 15.03.2026!
Level: Bachelor
Timeframe: Start possible from April 2026; Submission possible by August 2026 at the earliest
Supervisor: Leonie Manzke
When consumers seek information online to inform a purchase decision, they often aim to quickly extract actionable guidance rather than evaluate every available source in depth. AI-assisted search and AI-generated summaries have reshaped this process by presenting compact, synthesized claims in polished language, which many users treat as a “one-stop solution” (Kaiser et al., 2025). As a result, decision-making now relies on a fundamentally changed process for judging the credibility of information. Users must decide which summary claims to accept or discount, and translate a mix of credible and dubious information into judgments.
Research Gap
Credibility research is well established, but past work has focused on the credibility of websites, social media content, customer reviews, or traditional online search (Fogg, 2003; Suarez-Lledo & Alvarez-Galvez, 2021; Cheung, Sia & Kuan, 2012; Brand & Reith, 2022; Kammerer & Gerjets, 2012). Since AI summaries have become ubiquitous, the process of consumer information-seeking has been fundamentally transformed. Therefore, there is a need for an integrative framework that organizes which antecedents are likely to shape credibility assessments of claims found in AI summaries, and how these antecedents may operate during online information-seeking.
Thesis Goals
This Bachelor’s thesis aims to conduct a scoping literature review (Arksey & O’Malley, 2005; Levac et al., 2010) in order to develop a conceptual framework that explains credibility assessments of claims presented in AI-generated summaries in online information-seeking processes.
Requirements
- Willingness to write the thesis in English.
- Interest in interdisciplinary work (consumer judgment, information behavior, credibility research, human–AI interaction).
- Basic familiarity with empirical research is beneficial.
References
Arksey, H., & O’Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19–32. https://doi.org/10.1080/1364557032000119616
Brand, B. M., & Reith, R. (2022). Cultural differences in the perception of credible online reviews – The influence of presentation format. Decision Support Systems, 154, 113710. https://doi.org/10.1016/j.dss.2021.113710
Cheung, C. M. K., Sia, C. L., & Kuan, K. K. Y. (2012). Is this review believable? A study of factors affecting the credibility of online consumer reviews from an ELM perspective. Journal of the Association for Information Systems, 13(8), 618–635. https://doi.org/10.17705/1jais.00305
Fogg, B. J. (2003). Prominence-interpretation theory: Explaining how people assess credibility online. In CHI ’03 extended abstracts on Human factors in computing systems (p. 722). https://doi.org/10.1145/765891.765951
Kaiser, C., Kaiser, J., Schallner, R., & Schneider, S. (2025). A new era of online search? A large-scale study of user behavior and personal preferences during practical search tasks with generative AI versus traditional search engines. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1–7). https://doi.org/10.1145/3706599.3720123
Kammerer, Y., & Gerjets, P. (2012). How search engine users evaluate and select web search results: The impact of the search engine interface on credibility assessments. In D. Lewandowski (Ed.), Web search engine research (pp. 205–233). Emerald Group Publishing Limited. https://doi.org/10.1108/S1876-0562(2012)002012a012
Levac, D., Colquhoun, H., & O’Brien, K. K. (2010). Scoping studies: Advancing the methodology. Implementation Science, 5(1), Article 69. https://doi.org/10.1186/1748-5908-5-69
Suarez-Lledo, V., & Alvarez-Galvez, J. (2021). Prevalence of health misinformation on social media: Systematic review. Journal of Medical Internet Research, 23(1), Article e17187. https://doi.org/10.2196/17187
Supervisor: Sophie Kuhlemann
Background
Advances in computational power and declining data-processing costs have accelerated the diffusion of artificial intelligence (AI) across organizations, as firms increasingly implement AI-based systems with the expectation of enhancing performance and welfare (Glikson & Woolley, 2020; Ludwig & Achtziger, 2021). Despite these expectations, prior studies suggest that the anticipated performance gains from AI adoption often fail to materialize, leaving substantial welfare potential untapped (Vaccaro et al., 2024; De Freitas et al., 2023). This has shifted scholarly attention toward the human side of AI deployment, raising the question of the conditions under which decision makers disregard or resist AI-based decision support.
A central concept in this literature is algorithm aversion, which refers to individuals’ tendency to prefer human judgment over algorithmic advice (Burton et al., 2020; Mahmud et al., 2022; Jussupow et al., 2020). Dietvorst et al. (2015) initially attributed algorithm aversion to individuals’ heightened sensitivity to algorithmic errors compared to human errors. Subsequent research has identified additional drivers, including concerns that algorithms fail to account for individual circumstances (Longoni et al., 2019) and resistance in domains perceived as subjective or intuition-based (Castelo et al., 2019). At the same time, other studies highlight algorithm appreciation, where individuals value and rely on algorithmic input, resulting in mixed and sometimes contradictory empirical findings (Logg et al., 2019). This heterogeneity complicates the derivation of robust conclusions about when and why algorithm aversion occurs.
One plausible explanation for these inconsistencies lies in methodological variation across studies. Prior research differs in the operationalization of dependent variables (Zehnle et al., 2025), the use of hypothetical versus real decision contexts (Logg & Schlund, 2024), and the conceptualization of the human–AI relationship (Jussupow et al., 2024). Consequently, algorithm aversion may be captured in ways that are not fully comparable across studies, potentially producing apparent contradictions driven more by measurement and design choices than by substantive differences.
Thesis Goals
Against this background, this thesis aims to provide a descriptive methodological review of the algorithm aversion literature. Following a structured search and screening process, the review will systematically code and analyze the methodological characteristics of existing studies. The objectives are to map how algorithm aversion is operationalized and measured, identify dominant study designs, and uncover methodological blind spots or underexplored areas. By clarifying how algorithm aversion has been studied to date, the review seeks to facilitate the synthesis of existing findings and inform methodological decisions in future research.
Level: Master or Bachelor, provided requirements are fulfilled.
Methodological approach:
Descriptive Literature Review: What methodological limitations and underexplored areas can be identified in the algorithm aversion literature?
Requirements
- Interest in new technologies and user-centric perspectives, particularly human–AI interaction.
- Good (!) knowledge of research methods and the ability to distinguish between common approaches (e.g., surveys, experiments, interviews, and field studies).
Support provided
- Students will receive access to core literature on algorithm aversion.
- Students will receive access to “how-to” literature regarding descriptive reviews.
- Students will receive a predefined search string for their review as well as guidance on screening and coding procedures.
References
Burton, J. W., Stein, M., & Jensen, T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220–239. https://doi.org/10.1002/bdm.2155
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. https://doi.org/10.1177/0022243719851788
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
Jussupow, E., Benbasat, I., & Heinzl, A. (2024). An integrative perspective on algorithm aversion and appreciation in decision-making.
Jussupow, E., Benbasat, I., & Heinzl, A. (2020). Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. In Proceedings of the 28th European Conference on Information Systems (ECIS 2020).
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
Logg, J. M., & Schlund, R. (2024). A simple explanation reconciles “algorithm aversion” and “algorithm appreciation”: Hypotheticals vs. real judgments.
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to Medical Artificial Intelligence. Journal of Consumer Research, 46(4), 629–650. https://doi.org/10.1093/jcr/ucz013
Ludwig, J., & Achtziger, A. (2021). Cognitive misers on the web: An online-experiment of incentives, cheating, and cognitive reflection. Journal of Behavioral and Experimental Economics, 94, 101731. https://doi.org/10.1016/j.socec.2021.101731
Mahmud, H., Islam, A. K. M. N., Ahmed, S. I., & Smolander, K. (2022). What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change, 175, 121390. https://doi.org/10.1016/j.techfore.2021.121390
Zehnle, M., Hildebrand, C., & Valenzuela, A. (2025). Not all AI is created equal: A meta-analysis revealing drivers of AI resistance across markets, methods, and time. International Journal of Research in Marketing. https://doi.org/10.1016/j.ijresmar.2025.02.005
Please note: Applications are closed.
Supervisor: Laura Schneider
According to the extended mind hypothesis, human cognition extends beyond the brain and nervous system to the body and environmental tools (Clark & Chalmers, 1998). Using technological tools to facilitate cognitive processes is often referred to as “cognitive offloading” or “distributed cognition” (Risko & Gilbert, 2016). In the age of generative AI and its increasing capabilities, the possibilities for humans to offload energy- and time-consuming cognitive processes to AI are numerous and evolving rapidly.
Research Gap
In many human-AI collaboration scenarios, humans remain the “final authority” who accepts or rejects AI recommendations. Therefore, using AI for task completion requires not only controlling and monitoring one’s own cognitive processes (often referred to as metacognition) but also evaluating the AI’s processes and outputs (Dunn et al., 2021; Tankelevitch et al., 2024). Despite the increasing metacognitive demands of generative AI and the relevance of accurate metacognition for preventing under- or overreliance on AI outputs, there is a lack of research investigating the role of human metacognition in (successful) human-AI collaboration.
Thesis Goals
Bachelor’s and Master’s theses on this topic aim to investigate the dual role of metacognition: first, in the decision-making process of when to offload cognitive tasks to AI; and second, in how humans evaluate AI outputs once collaboration occurs. Research should explore how targeted interventions can be designed to enhance metacognitive accuracy during human-AI collaboration. The goal is to gain conceptual and empirical insights that will advance our understanding of human-AI cognitive partnerships and inform future design approaches.
Level: Bachelor or Master
Research Approaches
(Systematic) Literature Review
- Conduct a (systematic) literature search exploring interventions (such as cognitive forcing strategies) that support metacognitive monitoring/control and prevent excessive cognitive offloading.
- Review interdisciplinary research and map findings onto generative AI applications.
- Identify theoretical frameworks that can explain metacognitive processes in human-AI collaboration.
Experimental Approaches
- Design creative interventions to support human metacognition before/while using AI and develop experimental protocols to test their effectiveness.
- Investigate the impacts of using generative AI on human metacognition and cognition (e.g., decision-making, critical thinking, problem-solving).
- Explore how different AI interface designs (e.g., OpenAI’s reasoning models) affect users’ metacognitive accuracy.
Surveys and Interviews
- Conduct surveys or interviews to investigate factors that determine whether individuals offload cognitive tasks to AI.
- Explore positive and negative consequences of cognitive offloading to AI.
- Study domain-specific differences in metacognitive strategies when collaborating with AI.
Requirements
- Interest in interdisciplinary research combining cognitive psychology and AI.
- Basic understanding of experimental design or qualitative research methods.
- Willingness to engage with both technical and psychological literature.
- For experimental approaches: Basic programming skills are beneficial.
References
Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19. https://doi.org/10.1093/analys/58.1.7
Dunn, T. L., Gaspar, C., McLean, D., Koehler, D. J., & Risko, E. F. (2021). Distributed metacognition: Increased Bias and Deficits in Metacognitive Sensitivity when Retrieving Information from the Internet. Technology, Mind, and Behavior, 2(3). https://doi.org/10.1037/tmb0000039
Risko, E. F., & Gilbert, S. J. (2016). Cognitive Offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002
Tankelevitch, L., Kewenig, V., Simkute, A., Scott, A. E., Sarkar, A., Sellen, A., & Rintel, S. (2024). The Metacognitive Demands and Opportunities of Generative AI. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–24. https://doi.org/10.1145/3613904.3642902