Sponsored by: Mouser Electronics

IEEE NSW CIS Chapter Seminar & UTS Research Seminar


Language agents, designed to interact with their environment and achieve goals through natural language, traditionally rely on Reinforcement Learning (RL). The emergence of Large Language Models (LLMs) has expanded their capabilities, offering greater autonomy and adaptability. However, little attention has been paid to augmenting the morality of these agents. RL agents are often trained to pursue specific goals while neglecting moral consequences, and LLMs may incorporate biases from their training data, which can lead to immoral behaviours in practical applications. This presentation introduces our latest research endeavours to enhance both the task performance and the ethical conduct of language agents engaged in complex interactive tasks.
For RL agents, we use text-based games as a simulation environment, mirroring real-world complexities with embedded moral dilemmas. Our objective thus extends beyond improving game performance to developing agents that exhibit moral behaviour. We first develop a novel algorithm that boosts the moral reasoning of RL agents using a moral-aware learning module, enabling adaptive learning of both task execution and ethical behaviour. Considering the implicit nature of morality, we further integrate a cost-effective human-in-the-loop strategy to guide RL agents toward moral decision-making. This method significantly reduces the amount of human feedback required, demonstrating that minimal human input can enhance task performance and diminish immoral behaviour.
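The general idea of moral-aware learning can be illustrated with a toy sketch: the agent's reward is shaped by subtracting a weighted morality cost, so an immoral action that pays slightly more in task reward becomes less attractive. All names, reward values, and the penalty weight `lam` below are illustrative assumptions, not the algorithm presented in the talk.

```python
import random

# Hypothetical one-step scenario: at a locked door, the agent can
# "pick_lock" (higher task reward, but immoral) or "knock" (lower
# reward, moral). Values here are made up for illustration.
TASK_REWARD = {"pick_lock": 1.0, "knock": 0.8}
MORAL_COST = {"pick_lock": 1.0, "knock": 0.0}

def train(lam, episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Tabular Q-learning on the one-step task.

    `lam` weights the moral cost in the shaped reward; lam=0 recovers
    a purely task-driven agent.
    """
    rng = random.Random(seed)
    q = {"pick_lock": 0.0, "knock": 0.0}
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = rng.choice(list(q)) if rng.random() < eps else max(q, key=q.get)
        # moral-aware reward shaping: task reward minus morality penalty
        r = TASK_REWARD[a] - lam * MORAL_COST[a]
        q[a] += alpha * (r - q[a])
    return q

plain = train(lam=0.0)  # task-only agent learns to pick the lock
moral = train(lam=0.5)  # moral-aware agent learns to knock instead
```

With `lam=0` the agent converges on the immoral but higher-paying action; with a large enough penalty weight, the shaped values flip and the moral action dominates, without changing the task itself.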
Shifting focus to LLM agents, we begin with a comprehensive review of morality in LLM research, scrutinizing their moral task performance, alignment strategies for moral incorporation, and the evaluation metrics provided by existing datasets and benchmarks. We then explore how LLM agents can improve their moral decision-making through reflection. Our experiments, conducted within text-based games, show that integrating reflection enables LLM agents to make more ethical decisions when confronted with moral dilemmas.
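A reflection step of this kind can be sketched as a propose-critique-revise loop. The code below is a minimal illustration, not the authors' implementation: a stub function stands in for the LLM calls, and the prompts, keywords, and game scenario are invented for the example.

```python
def stub_llm(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would query a model.

    The stub proposes an immoral action, flags it when asked to
    critique, and produces a moral alternative when asked to revise.
    """
    if prompt.startswith("Revise"):
        return "buy the potion"
    if prompt.startswith("Critique") and "steal" in prompt:
        return "This action harms the shopkeeper."
    return "steal the potion"

def act_with_reflection(observation: str, llm=stub_llm) -> str:
    """Propose an action, reflect on its morality, and revise if needed."""
    proposal = llm(f"Observation: {observation}\nPropose an action.")
    critique = llm(f"Critique the morality of: {proposal}")
    if "harms" in critique:  # reflection surfaced a moral problem
        proposal = llm(f"Revise: {proposal}\nCritique: {critique}")
    return proposal
```

Without the reflection step the agent would return its first proposal; the extra critique-and-revise round is what steers the final decision toward the ethical option.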
Speaker(s): Ling Chen
CB11.09.118, University of Technology Sydney, Broadway, Ultimo, New South Wales, Australia, 2007

