
Belief or Circuitry? Causal Evidence for In-Context Graph Learning

ArXiv cs.AI · Tue, 12 May 2026 04:00:00 GMT

arXiv:2605.08405v1 · Announce Type: new

Abstract: How do LLMs learn in-context? Is it by pattern-matching recent tokens, or by inferring latent structure? We probe this question using a toy graph random-walk task across two competing graph structures. This task's answer is, in principle
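The setup described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's actual code: the specific graphs, walk lengths, and prompt format are assumptions, since the excerpt only states that random walks are sampled on a graph and that two competing graph structures could account for the in-context sequence.

```python
import random

def random_walk(adj, start, length, rng):
    """Sample a random walk of `length` steps on an adjacency-list graph."""
    walk = [start]
    node = start
    for _ in range(length):
        node = rng.choice(adj[node])  # uniform over neighbors
        walk.append(node)
    return walk

# Two competing graph structures over the same node labels (hypothetical
# example graphs; the paper's actual structures are not given in this excerpt).
graph_a = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # 4-cycle
graph_b = {0: [3], 1: [2], 2: [1], 3: [0]}              # two disjoint edges

rng = random.Random(0)
walk = random_walk(graph_a, start=0, length=8, rng=rng)

# Render the walk as a token sequence, as one might for an in-context prompt.
prompt = " ".join(str(n) for n in walk)
print(prompt)
```

A probe in this style would then ask whether the model's next-token predictions are consistent with the true generating graph (here `graph_a`) or with the competing structure, distinguishing latent-structure inference from surface pattern-matching on recent tokens.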
