LangChain - 02 - Quickstart: Models, Prompts, and Parsing
LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with LangSmith.
Note that LangSmith is not needed, but it is helpful.
If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
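The same variables can also be set from Python before any chains run; a minimal sketch (the key value is a placeholder you would replace with your own LangSmith API key):

import os

# Turn on LangSmith tracing for everything LangChain runs in this process.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "..."  # placeholder: your LangSmith API key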
LangServe
LangServe helps developers deploy LangChain chains as a REST API. You do not need to use LangServe to use LangChain, but in this guide we’ll show how you can deploy your app with LangServe.
Install with:
pip install "langserve[all]"
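A minimal sketch of what serving a model with LangServe can look like (assuming the langchain_openai package is installed and an OPENAI_API_KEY is set; the path name is an arbitrary choice for illustration):

from fastapi import FastAPI
from langserve import add_routes
from langchain_openai import ChatOpenAI

# Expose a chat model (or any LCEL chain) as REST endpoints such as /chat/invoke.
app = FastAPI(title="Demo LangServe app")
add_routes(app, ChatOpenAI(), path="/chat")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)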
Building with LangChain
LangChain provides many modules that can be used to build language model applications. Modules can be used as standalones in simple applications and they can be composed for more complex use cases. Composition is powered by LangChain Expression Language (LCEL), which defines a unified Runnable interface that many modules implement, making it possible to seamlessly chain components.
In short: the modules in LangChain can be composed together through the LCEL expression language.
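As a rough illustration of that composition (assuming the langchain_openai integration; any chat model integration would work in its place), a prompt template, a chat model, and an output parser are each Runnables, so the | operator chains them:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Each piece implements the Runnable interface, so | composes them into one chain.
prompt = ChatPromptTemplate.from_template("Translate the following into {language}: {text}")
model = ChatOpenAI()
parser = StrOutputParser()
chain = prompt | model | parser

print(chain.invoke({"language": "French", "text": "Hello, how are you?"}))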
The simplest and most common chain contains three things:
LLM/Chat Model: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them.
Prompt Template: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial.
Output Parser: These translate the raw response from the language model to a more workable format, making it easy to use the output downstream.
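Each of those three pieces can also be used on its own before being composed; a minimal sketch, again assuming the langchain_openai integration:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Chat model on its own: takes a prompt (here a plain string) and returns an AI message.
model = ChatOpenAI()
ai_message = model.invoke("What is an output parser?")

# Prompt template on its own: fills variables into a reusable prompt.
prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
prompt_value = prompt.invoke({"topic": "prompt templates"})

# Output parser on its own: turns the raw AI message into a plain string.
parser = StrOutputParser()
print(parser.invoke(ai_message))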
In this guide we’ll cover those three components individually, and then go over how to combine them. Understanding these concepts will set you up well for being able to use and customize LangChain applications. Most LangChain applications allow you to configure the model and/or the prompt, so knowing how to take advantage of this will be a big enabler.