LangChain 64, Deep Dive Part 27: Adding Moderation, LangChain Expression Language (LCEL)
2024-01-09 10:19:48
LangChain series articles
- LangChain 50, Deep Dive Part 13: Custom pipeline functions, LangChain Expression Language (LCEL)
- LangChain 51, Deep Dive Part 14: Auto-fixing with RunnableConfig, LangChain Expression Language (LCEL)
- LangChain 52, Deep Dive Part 15: Bind runtime args, LangChain Expression Language (LCEL)
- LangChain 53, Deep Dive Part 16: Dynamic routing, LangChain Expression Language (LCEL)
- LangChain 54, Deep Dive Part 17: Chains-based dynamic routing, LangChain Expression Language (LCEL)
- LangChain 55, Deep Dive Part 18: Custom function-based dynamic routing, LangChain Expression Language (LCEL)
- LangChain 56, Deep Dive Part 19: Selecting the LLM at runtime via config, LangChain Expression Language (LCEL)
- LangChain 57, Deep Dive Part 20: LLM fallbacks for rate limits, LangChain Expression Language (LCEL)
- LangChain 58, Deep Dive Part 21: Memory and message history, LangChain Expression Language (LCEL)
- LangChain 59, Deep Dive Part 22: Multiple interacting chains, LangChain Expression Language (LCEL)
- LangChain 60, Deep Dive Part 23: Passing arguments through multiple chains, LangChain Expression Language (LCEL)
- LangChain 61, Deep Dive Part 24: Passing arguments through multiple chains, LangChain Expression Language (LCEL)
- LangChain 62, Deep Dive Part 25: Agents, LangChain Expression Language (LCEL)
- LangChain 63, Deep Dive Part 26: Generating and executing code, LangChain Expression Language (LCEL)
1. Adding Moderation
This shows how to add moderation (or other safeguards) around your LLM application.
Code implementation
from langchain.chains import OpenAIModerationChain
from langchain.prompts import ChatPromptTemplate
from langchain_community.llms import OpenAI
from dotenv import load_dotenv  # loads environment variables from a .env file
load_dotenv()  # actually load the environment variables (e.g. OPENAI_API_KEY)
from langchain.globals import set_debug  # debug-mode switch for LangChain
set_debug(True)  # enable LangChain's verbose run logging

moderate = OpenAIModerationChain()
model = OpenAI()
prompt = ChatPromptTemplate.from_messages([("system", "repeat after me: {input}")])

# Plain chain: prompt -> completion model
chain = prompt | model
normal_response = chain.invoke({"input": "you are stupid"})
print('normal_response >> ', normal_response)

# Same chain with a moderation step appended to the model output
moderated_chain = chain | moderate
moderated_response = moderated_chain.invoke({"input": "you are stupid"})
print('moderated_response >> ', moderated_response)
Run output
You tried to access openai.Moderation, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
OpenAIModerationChain still uses the legacy SDK interface, so you need to install the older openai package: pip install openai==0.28
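If downgrading isn't an option, one alternative (a sketch under openai>=1.0, not the official LangChain recipe) is to call the moderation endpoint directly and wrap it as a runnable step. The v1 SDK does expose `client.moderations.create`; the function names and the replacement message below are illustrative:

```python
# Sketch: a moderation gate compatible with openai>=1.0, usable in place
# of the legacy OpenAIModerationChain. The gate itself is pure logic, so
# it can be tested without a network call.

def gate_output(text: str, flagged: bool) -> str:
    """Pass the text through unchanged unless moderation flagged it."""
    if flagged:
        # Illustrative replacement message, not the chain's official string.
        return "Text was found that violates the content policy."
    return text


def moderate_v1(text: str) -> str:
    """Call the openai>=1.0 moderation endpoint, then apply the gate."""
    from openai import OpenAI as OpenAIClient  # requires OPENAI_API_KEY

    client = OpenAIClient()
    result = client.moderations.create(input=text)
    return gate_output(text, result.results[0].flagged)


# The gate could then replace the moderation step in the chain, e.g.:
#   from langchain_core.runnables import RunnableLambda
#   moderated_chain = prompt | model | RunnableLambda(moderate_v1)
```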
Output after downgrading
(.venv) zgpeace@zgpeaces-MacBook-Pro git:(develop) ?% python LCEL/moderation.py ~/Workspace/LLM/langchain-llm-app
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
"input": "you are stupid"
}
[chain/start] [1:chain:RunnableSequence > 2:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
"input": "you are stupid"
}
[chain/end] [1:chain:RunnableSequence > 2:prompt:ChatPromptTemplate] [7ms] Exiting Prompt run with output:
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"chat",
"ChatPromptValue"
],
"kwargs": {
"messages": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"SystemMessage"
],
"kwargs": {
"content": "repeat after me: you are stupid",
"additional_kwargs": {}
}
}
]
}
}
[llm/start] [1:chain:RunnableSequence > 3:llm:OpenAI] Entering LLM run with input:
{
"prompts": [
"System: repeat after me: you are stupid"
]
}
[llm/end] [1:chain:RunnableSequence > 3:llm:OpenAI] [1.97s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "\n\nI am stupid. ",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
},
"type": "Generation"
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 9,
"completion_tokens": 6,
"total_tokens": 15
},
"model_name": "gpt-3.5-turbo-instruct"
},
"run": null
}
[chain/end] [1:chain:RunnableSequence] [1.99s] Exiting Chain run with output:
{
"output": "\n\nI am stupid. "
}
normal_response >>
I am stupid.
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
"input": "you are stupid"
}
[chain/start] [1:chain:RunnableSequence > 2:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
"input": "you are stupid"
}
[chain/end] [1:chain:RunnableSequence > 2:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"prompts",
"chat",
"ChatPromptValue"
],
"kwargs": {
"messages": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"SystemMessage"
],
"kwargs": {
"content": "repeat after me: you are stupid",
"additional_kwargs": {}
}
}
]
}
}
[llm/start] [1:chain:RunnableSequence > 3:llm:OpenAI] Entering LLM run with input:
{
"prompts": [
"System: repeat after me: you are stupid"
]
}
[llm/end] [1:chain:RunnableSequence > 3:llm:OpenAI] [1.47s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid.",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
},
"type": "Generation"
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 9,
"completion_tokens": 31,
"total_tokens": 40
},
"model_name": "gpt-3.5-turbo-instruct"
},
"run": null
}
[chain/start] [1:chain:RunnableSequence > 4:chain:OpenAIModerationChain] Entering Chain run with input:
{
"input": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid."
}
[chain/end] [1:chain:RunnableSequence > 4:chain:OpenAIModerationChain] [1.02s] Exiting Chain run with output:
{
"output": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid."
}
[chain/end] [1:chain:RunnableSequence] [2.50s] Exiting Chain run with output:
{
"input": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid.",
"output": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid."
}
moderated_response >> {'input': '\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid.', 'output': '\n\nI am not stupid, I am a computer program designed to assist and communicate with users. I do not possess the capability to be intelligent or stupid.'}
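Since the model's reply was not flagged, the moderation chain passed it through unchanged, which is why input and output above are identical. The result is a plain dict, so extracting the final text is just an index (the sample dict below is abbreviated from the output above):

```python
# The moderated chain returns a dict: "input" is the text the moderation
# step received, "output" is its possibly-rewritten result.
moderated_response = {
    "input": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users.",
    "output": "\n\nI am not stupid, I am a computer program designed to assist and communicate with users.",
}

# Pull out the final text and strip the leading newlines that the
# completion model tends to emit.
final_text = moderated_response["output"].strip()
print(final_text)
```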
Code
https://github.com/zgpeace/pets-name-langchain/tree/develop
Reference
https://python.langchain.com/docs/expression_language/cookbook/moderation
Source: https://blog.csdn.net/zgpeace/article/details/135440742