prompting (25)
- LLMs cannot find reasoning errors, but can correct them
- Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation
- Unleashing the potential of prompt engineering in Large Language Models
- Towards Better Chain-of-Thought Prompting Strategies: A Survey
- Large Language Models Cannot Self-Correct Reasoning Yet
- A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future
- Graph of Thoughts: Solving Elaborate Problems with Large Language Models
- Boosting Logical Reasoning in Large Language Models through a New Framework: The Graph of Thought
- Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Self-Correction Strategies
- Tree of Thoughts: Deliberate Problem Solving with Large Language Models
- Large Language Model Guided Tree-of-Thought
- Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- Self-Refine: Iterative Refinement with Self-Feedback
- A Survey on In-context Learning
- Reasoning with Language Model Prompting: A Survey
- Large Language Models are Better Reasoners with Self-Verification
- Large Language Models Are Human-Level Prompt Engineers
- Automatic Chain of Thought Prompting in Large Language Models
- Complexity-Based Prompting for Multi-Step Reasoning
- Large Language Models are Zero-Shot Reasoners
- Self-Consistency Improves Chain of Thought Reasoning in Language Models (see the sketch after this list)
- Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
- Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
- Language Models are Few-Shot Learners
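
Most of the techniques above are a small outer loop around a sampling call. As one concrete anchor, here is a minimal sketch that combines the zero-shot chain-of-thought trigger from "Large Language Models are Zero-Shot Reasoners" with the majority vote from "Self-Consistency Improves Chain of Thought Reasoning in Language Models". This is an illustrative sketch, not any paper's reference implementation: the `sample` callable and `toy_sampler` are hypothetical stand-ins for a real LLM client, and the answer extraction is deliberately naive.

```python
import random
import re
from collections import Counter
from typing import Callable, Optional

def zero_shot_cot_prompt(question: str) -> str:
    # Zero-shot CoT trigger phrase from "Large Language Models are
    # Zero-Shot Reasoners" (Kojima et al.).
    return f"Q: {question}\nA: Let's think step by step."

def extract_answer(completion: str) -> Optional[str]:
    # Naive parse: treat the last number in the completion as the final
    # answer. Real pipelines enforce a stricter format, e.g. "The answer is X".
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return numbers[-1] if numbers else None

def self_consistent_answer(question: str,
                           sample: Callable[[str], str],
                           n_samples: int = 10) -> Optional[str]:
    # Self-consistency (Wang et al.): draw several reasoning paths at
    # nonzero temperature and majority-vote over their final answers.
    prompt = zero_shot_cot_prompt(question)
    votes = Counter()
    for _ in range(n_samples):
        answer = extract_answer(sample(prompt))
        if answer is not None:
            votes[answer] += 1
    return votes.most_common(1)[0][0] if votes else None

if __name__ == "__main__":
    # Toy sampler standing in for a real model call, so the sketch runs
    # as-is. One of the three canned "reasoning paths" has an arithmetic
    # slip, so the vote usually (not always) lands on the right answer.
    def toy_sampler(prompt: str) -> str:
        return random.choice([
            "2 * 3 = 6, and 6 + 3 = 9. The answer is 9.",
            "Three pairs make 6; adding 3 gives 9. The answer is 9.",
            "6 + 3 = 8. The answer is 8.",  # minority, wrong path
        ])

    print(self_consistent_answer("3 pairs of apples plus 3 more?", toy_sampler))
```

Tree- and graph-of-thought methods swap the independent samples for an explicit search over partial "thought" states, and Self-Refine feeds the model's critique of its own draft back into the next attempt; both keep this same outer-loop shape.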