Available Topics

If you are interested in one of the topics listed below, please send a detailed application with your concrete topic idea, your CV, and your Transcript of Records to the respective supervisor.

All theses at the chair are to be written in English.

For students NOT enrolled in WiWi/WINF: Please clarify in advance with your degree program coordinator whether supervision by our chair is possible. For students from other faculties (TechFak, NatFak), for example, this is often not possible.

Supervisor: Leonie Manzke

Generative Artificial Intelligence (GenAI) has rapidly become pervasive in our everyday lives. However, seminal studies are already showing a range of negative side effects of GenAI use on cognitive engagement and abilities, risking cognitive atrophy and overreliance (Kosmyna et al., 2025; Lee et al., 2025; Schoeffer et al., 2025; Zhai et al., 2024). It is therefore imperative that human-AI interfaces be designed in ways that promote deep engagement and critical reflection on outputs (Yatani et al., 2024).

This thesis contributes to this endeavor by developing and experimentally testing a design intervention in a (mock-up) LLM interface such as ChatGPT that promotes reflection and critical thinking. Students may contribute their own ideas for study contexts and settings. Possible interventions include elements that induce deliberate friction, increase transparency, or provide metacognitive scaffolds; a minimal sketch of one such intervention follows below.
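To make this design space concrete, the following is a minimal sketch (in Python) of what a deliberate-friction intervention combined with a metacognitive scaffold could look like in a mock-up chat flow: the user commits to an expected answer and a confidence judgment before the model output is revealed. Everything here is an illustrative assumption, not a prescribed design; get_llm_response is a stub standing in for a real LLM backend.

```python
# Illustrative sketch only: a friction + metacognitive-scaffold intervention
# for a mock-up LLM chat. get_llm_response and all prompt wordings are
# hypothetical placeholders, not part of the thesis specification.

def get_llm_response(prompt: str) -> str:
    """Stub standing in for a real LLM backend (e.g., an API call)."""
    return f"(model answer to: {prompt!r})"

def scaffolded_chat(prompt: str) -> None:
    # Deliberate friction: the user must commit to an expectation first,
    # instead of passively consuming the AI output.
    own_answer = input("Before the AI answers: what do you expect the answer to be? ")
    confidence = input("How confident are you in your expectation (0-100)? ")

    answer = get_llm_response(prompt)
    print("\nAI answer:", answer)

    # Metacognitive scaffold: prompt an explicit comparison and critique.
    print(f"\nCompare the AI answer with your own expectation "
          f"({own_answer!r}, confidence {confidence}%).")
    input("Name one point where the AI answer could be wrong or incomplete: ")

if __name__ == "__main__":
    scaffolded_chat("Why do interest rates affect housing prices?")
```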

The experiment will be implemented in collaboration with researchers from the Nürnberg Institute for Market Decisions (NIM), who will co-supervise the thesis.

Level: Master; Bachelor only possible upon fulfillment of the requirements below

Requirements

  • Enrolled Master's student in International Information Systems, International Business Studies, Economics, Marketing, or a related program. Bachelor's students can only be considered after careful assessment of timeline feasibility.
  • Prior knowledge of and/or experience with statistical methods (e.g., ANOVA, LMM; see the sketch after this list)
  • Willingness to write the thesis in English.
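For orientation, here is a hedged sketch of the kinds of analyses the methods requirement points to: a one-way ANOVA and a linear mixed model (LMM) fitted with statsmodels in Python. The data are simulated, and all variable names (condition, score, participant) are illustrative assumptions, not a prescribed analysis plan.

```python
# Sketch of an ANOVA and an LMM on simulated data (all names illustrative).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_participants = 30
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), 3),
    "condition": np.tile(["control", "friction", "scaffold"], n_participants),
})
# Simulated outcome: small condition effects plus per-participant noise.
effect = df["condition"].map({"control": 0.0, "friction": 0.4, "scaffold": 0.6})
participant_noise = rng.normal(0, 0.5, n_participants)
df["score"] = effect + participant_noise[df["participant"]] + rng.normal(0, 1, len(df))

# One-way ANOVA: do mean scores differ across conditions?
# (Note: this ignores the repeated measures per participant.)
anova_table = sm.stats.anova_lm(smf.ols("score ~ C(condition)", data=df).fit(), typ=2)
print(anova_table)

# Linear mixed model: the same fixed effect, but with a random intercept per
# participant, which accounts for the within-subject design.
lmm = smf.mixedlm("score ~ C(condition)", data=df, groups=df["participant"]).fit()
print(lmm.summary())
```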

Highly Desirable: Previous experience in experimental methodology, e.g., through our courses “Experimentelle Verhaltensforschung in Data Science (EVIDS)” (Bachelor) or the seminar “Information Systems for Behavior Change (ISBC)” (Master).

References

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2506.08872

Lee, H.-P. (Hank), Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–22. https://doi.org/10.1145/3706598.3713778

Schoeffer, J., Jakubik, J., Vössing, M., Kühl, N., & Satzger, G. (2025). AI Reliance and Decision Quality: Fundamentals, Interdependence, and the Effects of Interventions. Journal of Artificial Intelligence Research, 82, 471–501.

Yatani, K., Sramek, Z., & Yang, C.-L. (2024). AI as Extraherics: Fostering Higher-order Thinking Skills in Human-AI Interaction (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2409.09218

Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments, 11(1), 28. https://doi.org/10.1186/s40561-024-00316-7

Please note: Applications are closed.

Supervisor: Laura Schneider

According to the extended mind hypothesis, human cognition extends beyond the brain and nervous system to the body and environmental tools (Clark & Chalmers, 1998). Using technological tools to facilitate cognitive processes is often referred to as “cognitive offloading” or “distributed cognition” (Risko & Gilbert, 2016). In the age of generative AI and its increasing capabilities, the possibilities for humans to offload energy- and time-consuming cognitive processes to AI are numerous and evolving rapidly.

Research Gap

In many human-AI collaboration scenarios, humans remain the “final authority” to accept or reject AI recommendations. Therefore, using AI for task completion not only necessitates controlling and monitoring one’s own cognitive processes (often referred to as metacognition) but also evaluating the AI’s processes and outputs (Dunn et al., 2021; Tankelevitch et al., 2024). Despite the increasing metacognitive demands of generative AI and the relevance of accurate metacognition for preventing under- or overreliance on AI outputs, there is a lack of research investigating the role of human metacognition in (successful) human-AI collaboration.

Thesis Goals

Bachelor’s and Master’s theses on this topic aim to investigate the dual role of metacognition: first, in the decision-making process of when to offload cognitive tasks to AI; and second, in how humans evaluate AI outputs once collaboration occurs. Research should explore how targeted interventions can be designed to enhance metacognitive accuracy during human-AI collaboration. The goal is to gain conceptual and empirical insights that will advance our understanding of human-AI cognitive partnerships and inform future design approaches.

Level: Bachelor or Master

Research Approaches

(Systematic) Literature Review

  • Conduct a (systematic) literature search exploring interventions (such as cognitive forcing strategies) that support metacognitive monitoring/control and prevent excessive cognitive offloading.
  • Review interdisciplinary research and map findings onto generative AI applications.
  • Identify theoretical frameworks that can explain metacognitive processes in human-AI collaboration.

Experimental Approaches

  • Design creative interventions to support human metacognition before/while using AI and develop experimental protocols to test their effectiveness.
  • Investigate the impacts of using generative AI on human metacognition and cognition (e.g., decision-making, critical thinking, problem-solving).
  • Explore how different AI interface designs (e.g., OpenAI’s reasoning models) affect users’ metacognitive accuracy; one common operationalization is sketched after this list.
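As a non-binding illustration of what “metacognitive accuracy” could mean operationally, the sketch below computes the Brier score between a user’s confidence in AI answers and whether those answers turned out to be correct. This is only one common calibration measure among several used in metacognition research (e.g., Goodman-Kruskal gamma is another), and the data are invented.

```python
# One possible (assumed, not prescribed) operationalization of metacognitive
# accuracy: the Brier score between confidence and correctness. Lower = better.
import numpy as np

def brier_score(confidence: np.ndarray, correct: np.ndarray) -> float:
    """confidence in [0, 1]; correct coded as 0/1 outcomes."""
    return float(np.mean((confidence - correct) ** 2))

# Invented data: confidence in ten AI answers and whether each was correct.
confidence = np.array([0.9, 0.8, 0.95, 0.6, 0.7, 0.85, 0.5, 0.9, 0.75, 0.8])
correct = np.array([1, 1, 0, 1, 0, 1, 0, 1, 1, 0])

print(f"Brier score: {brier_score(confidence, correct):.3f}")
# Baseline: a user who always reports the base rate of correctness.
baseline = np.full_like(confidence, correct.mean())
print(f"Baseline (constant confidence = mean accuracy): {brier_score(baseline, correct):.3f}")
```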

Surveys and Interviews

  • Conduct surveys or interviews to investigate factors that determine whether individuals offload cognitive tasks to AI.
  • Explore positive and negative consequences of cognitive offloading to AI.
  • Study domain-specific differences in metacognitive strategies when collaborating with AI.

Requirements

  • Interest in interdisciplinary research combining cognitive psychology and AI.
  • Basic understanding of experimental design or qualitative research methods.
  • Willingness to engage with both technical and psychological literature.
  • For experimental approaches: Basic programming skills are beneficial.

References

Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19. https://doi.org/10.1093/analys/58.1.7

Dunn, T. L., Gaspar, C., McLean, D., Koehler, D. J., & Risko, E. F. (2021). Distributed metacognition: Increased Bias and Deficits in Metacognitive Sensitivity when Retrieving Information from the Internet. Technology, Mind, and Behavior, 2(3). https://doi.org/10.1037/tmb0000039

Risko, E. F., & Gilbert, S. J. (2016). Cognitive Offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002

Tankelevitch, L., Kewenig, V., Simkute, A., Scott, A. E., Sarkar, A., Sellen, A., & Rintel, S. (2024). The Metacognitive Demands and Opportunities of Generative AI. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–24. https://doi.org/10.1145/3613904.3642902

Supervisor: Sophie Kuhlemann

Background

Advances in computational power and declining data-processing costs have accelerated the diffusion of artificial intelligence (AI) across organizations, as firms increasingly implement AI-based systems with the expectation of enhancing performance and welfare (Glikson & Woolley, 2020; Ludwig & Achtziger, 2021). Despite these expectations, prior studies suggest that the anticipated performance gains from AI adoption often fail to materialize, leaving substantial welfare potential untapped (Vaccaro et al., 2024; De Freitas et al., 2023). This has shifted scholarly attention toward the human side of AI deployment, raising the question of the conditions under which decision makers disregard or resist AI-based decision support.

A central concept in this literature is algorithm aversion, which refers to individuals’ tendency to prefer human judgment over algorithmic advice (Burton et al., 2020; Mahmud et al., 2022; Jussupow et al., 2020). Dietvorst et al. (2015) initially attributed algorithm aversion to individuals’ heightened sensitivity to algorithmic errors compared to human errors. Subsequent research has identified additional drivers, including concerns that algorithms fail to account for individual circumstances (Longoni et al., 2019) and resistance in domains perceived as subjective or intuition-based (Castelo et al., 2019). At the same time, other studies highlight algorithm appreciation, where individuals value and rely on algorithmic input, resulting in mixed and sometimes contradictory empirical findings (Logg et al., 2019). This heterogeneity complicates the derivation of robust conclusions about when and why algorithm aversion occurs.

One plausible explanation for these inconsistencies lies in methodological variation across studies. Prior research differs in the operationalization of dependent variables (Zehnle et al., 2025), the use of hypothetical versus real decision contexts (Logg & Schlund, 2024), and the conceptualization of the human–AI relationship (Jussupow et al., 2024). Consequently, algorithm aversion may be captured in ways that are not fully comparable across studies, potentially producing apparent contradictions driven more by measurement and design choices than by substantive differences.

Thesis Goals

Against this background, this thesis aims to provide a descriptive methodological review of the algorithm aversion literature. Following a structured search and screening process, the review will systematically code and analyze the methodological characteristics of existing studies. The objectives are to map how algorithm aversion is operationalized and measured, identify dominant study designs, and uncover methodological blind spots or underexplored areas. By clarifying how algorithm aversion has been studied to date, the review seeks to facilitate the synthesis of existing findings and inform methodological decisions in future research.

Level: Master or Bachelor, provided requirements are fulfilled.

Methodological Approach

Descriptive Literature Review: What methodological limitations and underexplored areas can be identified in the algorithm aversion literature?

Requirements

  • Interest in new technologies and user-centric perspectives, particularly human–AI interaction.
  • Good (!) knowledge of research methods and the ability to distinguish between common approaches (e.g., surveys, experiments, interviews, and field studies).

Support provided

  • Students will receive access to core literature on algorithm aversion.
  • Students will receive access to “how-to” literature regarding descriptive reviews.
  • Students will receive a predefined search string for their review as well as guidance on screening and coding procedures; a purely illustrative sketch of such a workflow follows below.
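The actual search string and coding scheme are predefined by the supervisor and are not reproduced here. Purely for illustration, the sketch below shows, with hypothetical values, what a coding sheet and a simple descriptive tally could look like in Python.

```python
# Hypothetical illustration of a descriptive-review coding workflow; the real
# search string and coding scheme come from the supervisor.
import pandas as pd

# Hypothetical Boolean search string (NOT the predefined one):
SEARCH_STRING = '("algorithm aversion" OR "algorithm appreciation") AND (experiment* OR survey*)'

# Minimal coding sheet: one row per screened study (all codings invented).
studies = pd.DataFrame([
    {"study": "Study A", "design": "experiment", "dv": "choice of advisor", "context": "hypothetical"},
    {"study": "Study B", "design": "experiment", "dv": "weight on advice", "context": "hypothetical"},
    {"study": "Study C", "design": "survey", "dv": "usage intention", "context": "real"},
])

# Descriptive mapping: how often is each design/DV combination used?
print(studies.groupby(["design", "dv"]).size())
```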

References

Burton, J. W., Stein, M., & Jensen, T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220–239. https://doi.org/10.1002/bdm.2155

Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-Dependent Algorithm Aversion. Journal of Marketing Research, 56(5), 809–825. https://doi.org/10.1177/0022243719851788

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033

Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057

Jussupow, E., Benbasat, I., & Heinzl, A. (2024). An Integrative Perspective on Algorithm Aversion and Appreciation in Decision-Making.

Jussupow, E., Benbasat, I., & Heinzl, A. (2020). Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. Proceedings of the 28th European Conference on Information Systems (ECIS 2020).

Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005

Logg, J. M., & Schlund, R. (2024). A simple explanation reconciles “algorithm aversion” and “algorithm appreciation”: Hypotheticals vs. real judgments.

Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to Medical Artificial Intelligence. Journal of Consumer Research, 46(4), 629–650. https://doi.org/10.1093/jcr/ucz013

Ludwig, J., & Achtziger, A. (2021). Cognitive misers on the web: An online-experiment of incentives, cheating, and cognitive reflection. Journal of Behavioral and Experimental Economics, 94, 101731. https://doi.org/10.1016/j.socec.2021.101731

Mahmud, H., Islam, A. K. M. N., Ahmed, S. I., & Smolander, K. (2022). What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change, 175, 121390. https://doi.org/10.1016/j.techfore.2021.121390

Zehnle, M., Hildebrand, C., & Valenzuela, A. (2025). Not all AI is created equal: A meta-analysis revealing drivers of AI resistance across markets, methods, and time. International Journal of Research in Marketing. Advance online publication. https://doi.org/10.1016/j.ijresmar.2025.02.005