## Prompt testing
- TypeScript [promptfoo](https://www.promptfoo.dev/)
- Python [PromptTools](https://github.com/hegelai/prompttools)
- Python [PromptFlow](https://microsoft.github.io/promptflow/)
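
The tools above automate prompt regression testing. A minimal, library-free sketch of the underlying pattern (templated cases plus simple assertions) might look like the following, where `call_model` is a hypothetical stand-in for any completion client:

```python
# Minimal prompt regression test; call_model is a placeholder for a real client.
# promptfoo, PromptTools, and PromptFlow provide richer versions of this pattern:
# templated prompts, multiple providers, and declarative assertions.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Case:
    variables: dict      # values substituted into the prompt template
    must_contain: str    # simple substring assertion on the completion


def run_suite(template: str, cases: list[Case], call_model: Callable[[str], str]) -> None:
    failures = []
    for case in cases:
        prompt = template.format(**case.variables)
        completion = call_model(prompt)
        if case.must_contain.lower() not in completion.lower():
            failures.append((prompt, completion))
    print(f"{len(cases) - len(failures)}/{len(cases)} cases passed")
    for prompt, completion in failures:
        print(f"FAIL\n  prompt: {prompt}\n  got:    {completion}")


if __name__ == "__main__":
    template = "Translate to French: {text}"
    cases = [
        Case({"text": "Good morning"}, must_contain="bonjour"),
        Case({"text": "Thank you"}, must_contain="merci"),
    ]
    # Stubbed model; wire in any chat-completion client here.
    run_suite(template, cases, call_model=lambda p: "Bonjour" if "morning" in p else "Merci")
```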
## Prompt design
- Few-shot Chain-of-Thought baseline: [Chain-of-Thought Prompting](Chain-of-Thought%20Prompting.md)
- Zero-shot Chain-of-Thought: [Plan-and-Solve Prompting](Plan-and-Solve%20Prompting.md)
- Retrieval-Augmented Generation: [RAG for Knowledge-Intensive NLP Tasks](https://proceedings.neurips.cc/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf)
- Reasoning & Acting: [ReAct Prompting](ReAct%20Prompting.md)
## Prompt tuning
- Prefix Tuning: [Optimizing Continuous Prompts for Generation](https://aclanthology.org/2021.acl-long.353/)
- Prompt Tuning: [The Power of Scale for Parameter-Efficient Prompt Tuning](Prompt%20Tuning.md)
- P-Tuning: [GPT Understands, Too](P-Tuning.md)
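
All three methods train a small set of continuous prompt parameters while keeping the backbone frozen. A minimal PyTorch sketch of the soft-prompt idea (prepending trainable virtual-token embeddings to the input embeddings) follows; the toy dimensions are illustrative, not tied to any specific model:

```python
# Soft prompt tuning in miniature: only the virtual-token embeddings are trained,
# the language model's own weights stay frozen.
import torch
import torch.nn as nn


class SoftPrompt(nn.Module):
    def __init__(self, num_virtual_tokens: int, hidden_size: int):
        super().__init__()
        # The only trainable parameters: one embedding per virtual token.
        self.prompt = nn.Parameter(torch.randn(num_virtual_tokens, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the learned prompt to every sequence in the batch.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)


if __name__ == "__main__":
    hidden, num_virtual = 64, 8
    soft_prompt = SoftPrompt(num_virtual, hidden)
    token_embeds = torch.randn(2, 16, hidden)   # frozen model's input embeddings
    extended = soft_prompt(token_embeds)        # shape: (2, 8 + 16, 64)
    print(extended.shape)
    # In practice only soft_prompt.parameters() go into the optimizer;
    # the language model's weights are left untouched.
```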
## Prompt optimization
- AutoPrompt: [Eliciting Knowledge from Language Models with Automatically Generated Prompts](https://aclanthology.org/2020.emnlp-main.346/)
- APO: [Automatic Prompt Optimization with “Gradient Descent” and Beam Search](https://arxiv.org/abs/2305.03495)
- APE: [Large Language Models Are Human-Level Prompt Engineers](Automatic%20Prompt%20Engineer%20(APE).md)
- OPRO: [OPRO - Large Language Models as Optimizers](OPRO%20-%20Large%20Language%20Models%20as%20Optimizers.md)
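
These methods share a propose-then-score search loop: an LLM generates candidate instructions, each is evaluated on a small dev set, and the best candidates seed the next round. The sketch below captures that loop with hypothetical `propose_candidates` and `evaluate` callables standing in for the meta-prompting and task-evaluation steps in the papers:

```python
# Schematic search loop shared by APE, APO, and OPRO; the callables are stubs.
from typing import Callable


def optimize_prompt(
    seed_instruction: str,
    dev_set: list[tuple[str, str]],
    propose_candidates: Callable[[str], list[str]],
    evaluate: Callable[[str, str], str],
    rounds: int = 3,
) -> str:
    def score(instruction: str) -> float:
        # Exact-match accuracy of the instruction on the dev set.
        hits = sum(evaluate(instruction, x).strip() == y for x, y in dev_set)
        return hits / len(dev_set)

    best, best_score = seed_instruction, score(seed_instruction)
    for _ in range(rounds):
        # Ask the proposer for variations of the current best instruction,
        # keep whichever candidate scores highest.
        for candidate in propose_candidates(best):
            s = score(candidate)
            if s > best_score:
                best, best_score = candidate, s
    return best


if __name__ == "__main__":
    dev = [("2+2", "4"), ("3+5", "8")]
    # Stubs: a real setup would call an LLM for both proposal and evaluation.
    proposer = lambda p: [p + " Answer with a single number.", p + " Be concise."]
    evaluator = lambda instr, x: str(eval(x))  # toy "model" that is always right
    print(optimize_prompt("Solve the arithmetic problem.", dev, proposer, evaluator))
```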
## Model tuning
- Distilling: [Distilling Step-by-Step](Distilling%20Step-by-Step.md)
- Finetuning: [Large Language Models Can Self-Improve](https://openreview.net/forum?id=NiEtU7blzN)
- Low-Rank Adaptation: [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)
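
As a rough sketch of the low-rank adaptation idea behind LoRA/QLoRA, the PyTorch layer below freezes a pretrained linear weight and trains only the low-rank factors, computing `W x + (alpha / r) * B A x`; QLoRA additionally quantizes the frozen base to 4-bit, which is omitted here. Shapes and hyperparameters are illustrative:

```python
# Minimal LoRA layer: the pretrained weight is frozen, only A and B are trained.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)           # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512), r=8, alpha=16)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable params: {trainable} / {total}")    # only A and B are trainable
```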