
Learning Transferable Latent User Preferences for Human-Aligned Decision Making

ArXiv cs.AI · Thu, 14 May 2026 04:00:00 GMT

arXiv:2605.12682v1 · Abstract: Large language models (LLMs) are increasingly used as reasoning modules in many applications. While they are efficient in certain tasks, LLMs often struggle to produce human-aligned solutions. Human-aligned decision making requires …
