Advanced Prompt Engineering: Principles, Tutorial, and Summary

Advanced Prompt Engineering

  • Note: some proficiency in English is required.

1. Chain-of-Thought (CoT)

Concept: CoT is a step-by-step reasoning method that guides the model to a conclusion through a sequence of connected steps, much like working through a math problem.

Analogy: Imagine solving a math problem: you write out each calculation in order, each step building on the previous one, until you reach the final answer.

Example

Prompt: “What is 15% of 200?”

Thought process:

  1. “15% of 200 is calculated by first finding 10%.”
  2. “10% of 200 is 20.”
  3. “Then we find 5%, which is half of 10%.”
  4. “5% of 200 is 10.”
  5. “Adding these together, 15% of 200 is 20 + 10, which is 30.”
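The steps above can be sketched in a few lines of Python. This is a minimal illustration, not any library's API: `cot_prompt` is a hypothetical helper showing the simplest zero-shot CoT trigger, and the arithmetic mirrors the worked example.

```python
# Minimal Chain-of-Thought sketch: append an explicit reasoning cue to the
# question (zero-shot CoT), and verify the worked arithmetic from the example.

def cot_prompt(question: str) -> str:
    # The "Let's think step by step" suffix is the classic zero-shot CoT cue.
    return f"{question}\nLet's think step by step."

# Reproduce the example: compute 15% of 200 as 10% + 5%.
ten_percent = 200 * 0.10        # step 2: 10% of 200 is 20
five_percent = ten_percent / 2  # step 4: 5% is half of 10%
answer = ten_percent + five_percent  # step 5: 20 + 10 = 30

print(cot_prompt("What is 15% of 200?"))
print(answer)  # 30.0
```

In practice the decomposition is produced by the model itself; the point of the prompt suffix is to elicit those intermediate steps rather than a bare answer.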

2. Tree-of-Thought (ToT)

Concept: ToT is a branching reasoning method: each reasoning step can spawn multiple branches, like a decision tree, allowing the model to explore several paths in parallel.

Analogy: Imagine exploring a maze: you can try different paths at the same time, recording the progress of each one, until you find the exit.

Example

Prompt: “Find the optimal route from A to B in the city.”

Thought process:

  1. “Consider the route via Main Street.”
  2. “Consider the route via Second Avenue.”
  3. “Evaluate traffic conditions and travel time for each route.”
  4. “Select the route with the shortest travel time.”
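The branch-and-evaluate loop above can be sketched as a toy in Python. The route names and travel times are invented for illustration; in a real ToT pipeline the model itself generates and scores each branch.

```python
# Toy Tree-of-Thought sketch: each candidate route is one branch of the tree;
# a scoring function evaluates every branch and the best one is kept.

branches = {
    "via Main Street": 25,    # hypothetical travel time in minutes
    "via Second Avenue": 18,  # hypothetical travel time in minutes
}

def evaluate(branch: str) -> int:
    # Stand-in for step 3: in real ToT, the model scores each partial path.
    return branches[branch]

# Step 4: keep the branch with the shortest travel time.
best = min(branches, key=evaluate)
print(f"Selected route: {best}")
```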

3. Graph-of-Thought (GoT)

Concept: GoT models the reasoning process as a directed graph, where each node represents a thought step and the edges between nodes represent reasoning paths, enhancing the model's capacity for complex reasoning.

Analogy: Imagine building a relationship graph: each node represents a task or concept, and the connections between nodes represent their relationships and influences.

Example

Prompt: “Analyze the impact of a new policy on the economy.”

Thought process:

  1. “Identify key economic indicators affected by the policy.”
  2. “Create a graph with nodes representing each indicator.”
  3. “Draw edges showing how the policy impacts each indicator.”
  4. “Analyze the overall impact based on the connections and interactions.”
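Steps 2-4 above amount to building a directed graph and walking its edges. A minimal sketch, with made-up indicator names as the nodes:

```python
# Minimal Graph-of-Thought sketch: nodes are thought steps (here, economic
# indicators), directed edges are inference paths. Names are illustrative.

edges = {
    "policy": ["inflation", "employment"],
    "inflation": ["consumer_spending"],
    "employment": ["consumer_spending"],
}

def reachable(start: str) -> set:
    # Walk the directed graph to collect every node the policy influences,
    # directly or transitively (step 4: analyze the connections).
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(reachable("policy")))  # ['consumer_spending', 'employment', 'inflation']
```

Unlike a tree, a graph lets two branches (inflation, employment) converge on the same node, which is exactly what GoT adds over ToT.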

4. Skeleton-of-Thought (SoT)

Concept: SoT first generates a skeleton of the answer (the main points) and then expands each point in parallel, substantially reducing decoding latency.

Analogy: Imagine writing an essay: first list the topic sentence of each main paragraph, then expand each topic sentence separately.

Example

Prompt: “Write an essay about the benefits of exercise.”

Thought process:

  1. “Outline: Introduction, Physical Benefits, Mental Benefits, Conclusion.”
  2. “Expand each point in parallel:”
    • “Introduction: Exercise is essential for a healthy lifestyle.”
    • “Physical Benefits: Improves cardiovascular health, builds muscle strength.”
    • “Mental Benefits: Reduces stress, improves mood.”
    • “Conclusion: Regular exercise leads to overall well-being.”
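The outline-then-expand pattern above maps directly onto a thread pool. In this sketch `expand` is a hypothetical stand-in for a per-point model call; the parallelism is what cuts SoT's decoding latency.

```python
# Minimal Skeleton-of-Thought sketch: generate the outline first, then
# expand every point concurrently.
from concurrent.futures import ThreadPoolExecutor

skeleton = ["Introduction", "Physical Benefits", "Mental Benefits", "Conclusion"]

def expand(point: str) -> str:
    # A real implementation would send each outline point to the LLM here.
    return f"{point}: ..."

with ThreadPoolExecutor() as pool:
    # map() preserves outline order even though expansion runs concurrently.
    paragraphs = list(pool.map(expand, skeleton))

print("\n".join(paragraphs))
```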

5. Algorithm-of-Thought (AoT)

Concept: AoT overcomes limitations of CoT by embedding examples of search algorithms in the prompt, enhancing exploration and problem-solving ability.

Analogy: Imagine writing a program that uses different algorithms to solve a problem, each algorithm offering a different path and strategy.

Example

Prompt: “Find the shortest path in a graph using AoT.”

Thought process:

  1. “Consider using Dijkstra’s algorithm for shortest path finding.”
  2. “Evaluate other algorithms like A* and Bellman-Ford.”
  3. “Compare the efficiency and results of each algorithm.”
  4. “Select the algorithm that provides the optimal path with minimal computation.”
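Step 1 of the thought process above, trying Dijkstra's algorithm, can be sketched concretely. The graph below is an invented toy example; in AoT, a worked trace like this would be embedded in the prompt so the model imitates the search procedure.

```python
# Classic priority-queue Dijkstra over a tiny adjacency dict (edge weights
# are made-up costs). Returns the shortest-path distance from start to goal.
import heapq

graph = {"A": {"B": 4, "C": 1}, "C": {"B": 2}, "B": {}}

def dijkstra(start: str, goal: str) -> float:
    dist = {start: 0}
    pq = [(0, start)]  # (distance-so-far, node)
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        for nxt, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return float("inf")

print(dijkstra("A", "B"))  # 3, via A -> C -> B rather than the direct edge of 4
```

Steps 2-3 would repeat the same comparison with A* or Bellman-Ford traces and pick whichever search pattern suits the problem.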

 

Summary Table of Papers

  • A more systematic, professional, and in-depth understanding of prompt engineering is impossible without the original papers. If reading them on your own is difficult, you can use a tool to assist, or skip it and rely on free options such as DeepL translation. The tool is currently one of the strongest online academic AIs, with translation, explanation, analogy, prediction, and evaluation features.

| Title | Summary | Link |
| --- | --- | --- |
| Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | Introduces Skeleton-of-Thought (SoT), a method that lets large language models decode in parallel: a skeleton of the answer is generated first, then each point is expanded in parallel, significantly reducing decoding latency. | https://ar5iv.labs.arxiv.org/html/2307.15337 |
| Graph of Thoughts: Solving Elaborate Problems with Large Language Models | Introduces GoT, a framework that models the LLM reasoning process as a directed graph, extending problem-solving beyond the traditional CoT and ToT paradigms. | https://ar5iv.labs.arxiv.org/html/2308.09687 |
| Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Large Language Models | Proposes a GoT reasoning method that encodes thought graphs with a graph attention network, aiming to improve LLMs on complex reasoning tasks. | https://ar5iv.labs.arxiv.org/html/2305.16582 |
| Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models | Discusses AoT, which overcomes limitations of CoT by integrating examples of algorithm-inspired search processes to enhance exploration and problem solving. | https://ar5iv.labs.arxiv.org/html/2308.10379 |
| Aggregated Contextual Transformations for High-Resolution Image Inpainting | Introduces AOT-GAN, a GAN-based model that uses aggregated contextual transformations (AOT blocks) for improved high-resolution image inpainting. | https://ar5iv.labs.arxiv.org/html/2104.01431 |
| Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data | Explores automatically selecting CoT examples from labeled data to optimize model performance across tasks. | https://ar5iv.labs.arxiv.org/html/2302.12822 |
| Automatic Chain of Thought Prompting in Large Language Models | Studies automatic CoT prompting, comparing zero-shot, manual, and random query-generation strategies on reasoning tasks. | https://ar5iv.labs.arxiv.org/html/2210.03493 |
| Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective | Provides a theoretical analysis of Transformers' ability to directly generate answers to complex reasoning tasks. | https://ar5iv.labs.arxiv.org/html/2305.15408 |
| Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions | Introduces a method that interleaves CoT reasoning with document retrieval to improve performance on multi-step questions. | https://ar5iv.labs.arxiv.org/html/2212.10509 |
| Tab-CoT: Zero-shot Tabular Chain of Thought | Proposes a tabular CoT prompt for zero-shot settings, promoting more structured reasoning. | https://ar5iv.labs.arxiv.org/html/2305.17812 |
| Faithful Chain-of-Thought Reasoning | Describes a method for ensuring the faithfulness of the CoT reasoning process across a variety of complex tasks. | https://ar5iv.labs.arxiv.org/html/2301.13379 |
| Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters | Presents an empirical study of which factors affect the effectiveness of CoT prompting. | https://ar5iv.labs.arxiv.org/html/2212.10001 |
| Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models | Evaluates a new prompting strategy that combines planning with CoT reasoning to improve zero-shot performance. | https://ar5iv.labs.arxiv.org/html/2305.04091 |
| Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models | Introduces Meta-CoT, a method for generalizing CoT prompting across different types of reasoning tasks. | https://ar5iv.labs.arxiv.org/html/2310.06692 |
| Large Language Models are Zero-Shot Reasoners | Discusses the inherent zero-shot reasoning abilities of large language models, highlighting the role of CoT prompting. | https://ar5iv.labs.arxiv.org/html/2205.11916 |