This paper provides a comprehensive overview of the current state of knowledge on reasoning in Large Language Models (LLMs), including techniques for improving and eliciting reasoning in these models, methods and benchmarks for evaluating reasoning abilities, findings and implications of previous research in this field, and suggestions for future directions.
The paper discusses the limitations of language models (LMs) and proposes a neuro-symbolic architecture, the Modular Reasoning, Knowledge and Language (MRKL) system, which combines LMs with external knowledge sources and discrete reasoning modules to overcome these limitations.
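The core MRKL idea, routing a query to a discrete module rather than relying on the LM's parametric knowledge, can be sketched as follows. This is an illustrative toy, not the paper's implementation; the module names and the keyword-based routing rule are hypothetical (a real MRKL router is itself learned).

```python
import re

def calculator_module(query: str) -> str:
    # Extract a simple arithmetic expression and evaluate it exactly.
    expr = re.search(r"\d[\d\s\+\-\*/\.\(\)]*", query).group().strip()
    return str(eval(expr))  # acceptable in a toy sketch; never eval untrusted input

def fallback_lm_module(query: str) -> str:
    # Placeholder standing in for a call to the underlying language model.
    return f"[LM answer for: {query}]"

def route(query: str) -> str:
    # Crude keyword routing for illustration only.
    if any(ch.isdigit() for ch in query) and re.search(r"[\+\-\*/]", query):
        return calculator_module(query)
    return fallback_lm_module(query)

print(route("What is 12 * 7?"))   # → 84, computed by the calculator, not the LM
```

The point of the architecture is that arithmetic (or a database lookup, or a currency conversion) is handled by a module that is always correct, while open-ended queries still fall through to the LM.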
The paper discusses the use of prompt engineering to leverage pre-trained language models for business process management (BPM) tasks, and identifies both the potential and the challenges of prompt engineering for BPM research.
The paper introduces Chameleon, a plug-and-play compositional reasoning framework that augments large language models (LLMs) to address their inherent limitations and tackle a broad range of reasoning tasks. Chameleon synthesizes programs to compose various tools, including LLMs, off-the-shelf vision models, web search engines, Python functions, and heuristic-based modules.
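Chameleon's "program" is essentially an ordered sequence of tool invocations. The sketch below illustrates the execution side under that assumption; the tool names and their toy behaviors are hypothetical, and in the actual framework the program itself is synthesized by an LLM planner rather than hard-coded.

```python
# Hypothetical tools that read and update a shared state dictionary.
def table_reader(state):
    state["value"] = 20          # pretend we extracted 20 from a table
    return state

def python_calculator(state):
    state["value"] = state["value"] * 1.1   # apply a computation
    return state

def answer_generator(state):
    state["answer"] = f"The result is {state['value']:.1f}"
    return state

TOOLS = {
    "table_reader": table_reader,
    "python_calculator": python_calculator,
    "answer_generator": answer_generator,
}

def execute(program, state=None):
    # Run each named tool in order, threading intermediate results forward.
    state = state or {}
    for tool_name in program:
        state = TOOLS[tool_name](state)
    return state

result = execute(["table_reader", "python_calculator", "answer_generator"])
print(result["answer"])  # → The result is 22.0
```

Because tools communicate only through the shared state, new modules can be added without changing the executor, which is what makes the framework "plug-and-play".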
The paper presents a method for aligning language models with user intent by fine-tuning with human feedback. The resulting models, called InstructGPT, show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. The results suggest that fine-tuning with human feedback is a promising direction for aligning language models with human intent.
The paper proposes Low-Rank Adaptation (LoRA) as an approach to reduce the number of trainable parameters for downstream tasks in natural language processing. LoRA freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of parameters that must be trained.
The paper explores how generating a chain of thought can improve the ability of large language models to perform complex reasoning. The authors introduce a simple method called chain-of-thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain-of-thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks.
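The method itself is just a prompting format: each in-context demonstration contains intermediate reasoning steps before the final answer, which the model then imitates on the new question. The sketch below builds such a prompt, using the paper's well-known tennis-ball arithmetic exemplar (lightly paraphrased).

```python
# One worked demonstration with explicit reasoning steps before the answer.
EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def cot_prompt(question: str) -> str:
    # Prepend the worked exemplar, then pose the new question.
    return EXEMPLAR + f"Q: {question}\nA:"

print(cot_prompt(
    "A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?"
))
```

In standard few-shot prompting the exemplar answer would be just "The answer is 11."; the paper's finding is that including the intermediate steps is what unlocks the gains on multi-step problems, and only at sufficient model scale.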
The paper discusses the limitations of pre-trained language representations in NLP systems and the need for task-specific datasets and fine-tuning. The authors show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches.
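The few-shot setting the paper evaluates specifies a task entirely in-context: a natural-language description plus a handful of input-output exemplars, with no gradient updates. A minimal sketch of that prompt format, using the translation demonstrations familiar from the paper:

```python
# Task demonstrations given purely in-context; no fine-tuning involved.
EXEMPLARS = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]

def few_shot_prompt(word: str) -> str:
    # Task description, then exemplars, then the new input to complete.
    lines = ["Translate English to French:"]
    for en, fr in EXEMPLARS:
        lines.append(f"{en} => {fr}")
    lines.append(f"{word} =>")
    return "\n".join(lines)

print(few_shot_prompt("mint"))
```

The model is expected to continue the pattern and emit the French translation; the paper's central result is that this works increasingly well as model scale grows, without any task-specific training.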