
Conversation Buffer Memory
The "base memory class" seen in the previous example is now put to use in a higher-level abstraction provided by LangChain:
In [1]:
from langchain.memory import CassandraChatMessageHistory
from langchain.memory import ConversationBufferMemory
In [2]:
from cqlsession import getCQLSession, getCQLKeyspace
cqlMode = 'astra_db'  # 'astra_db'/'local'
session = getCQLSession(mode=cqlMode)
keyspace = getCQLKeyspace(mode=cqlMode)
In [3]:
message_history = CassandraChatMessageHistory(
    session_id='conversation-0123',
    session=session,
    keyspace=keyspace,
    ttl_seconds=3600,
)
message_history.clear()
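The ttl_seconds argument above instructs Cassandra to expire stored messages after one hour, so the conversation "forgets" on its own. As a rough sketch of these TTL semantics (the real expiry happens server-side, row by row, inside Cassandra; this toy class is only an illustration):

```python
import time

# Toy illustration of per-row TTL: each stored message carries an expiry
# timestamp, and reads simply skip anything past it. Cassandra does the
# equivalent internally when a row is written USING TTL.
class ToyTTLStore:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.rows = []  # list of (expires_at, message) pairs

    def add(self, message, now=None):
        now = time.time() if now is None else now
        self.rows.append((now + self.ttl, message))

    def messages(self, now=None):
        now = time.time() if now is None else now
        return [m for expires_at, m in self.rows if expires_at > now]

store = ToyTTLStore(ttl_seconds=3600)
store.add("hello", now=0)
print(store.messages(now=10))    # still fresh: ['hello']
print(store.messages(now=4000))  # expired after one hour: []
```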
Use in a ConversationChain
Create a Memory
The Cassandra message history is specified:
In [4]:
cassBuffMemory = ConversationBufferMemory(
    chat_memory=message_history,
)
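Conceptually, ConversationBufferMemory accumulates every exchange verbatim and replays the full transcript into each new prompt. A minimal pure-Python sketch of this behavior (not LangChain's actual implementation):

```python
# A toy stand-in for ConversationBufferMemory: it stores every turn
# verbatim and renders the whole transcript as one string, which is
# what gets prepended to each new prompt.
class ToyBufferMemory:
    def __init__(self):
        self.turns = []  # list of (speaker, text) pairs

    def save_context(self, human_input, ai_output):
        self.turns.append(("Human", human_input))
        self.turns.append(("AI", ai_output))

    def load_history(self):
        # The whole conversation, unabridged.
        return "\n".join(f"{speaker}: {text}" for speaker, text in self.turns)

memory = ToyBufferMemory()
memory.save_context("Hello, how can I roast an apple?",
                    "Use the oven at 350 degrees for 25-30 minutes.")
print(memory.load_history())
```

The real class does the same bookkeeping through its chat_memory backend, which here is the Cassandra-backed message history.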
Language model
Below is the logic to instantiate the LLM of choice. We choose to leave it in the notebooks for clarity.
In [5]:
from llm_choice import suggestLLMProvider

llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'GCP_VertexAI', 'OpenAI' ... manually if you have credentials)

if llmProvider == 'GCP_VertexAI':
    from langchain.llms import VertexAI
    llm = VertexAI()
    print('LLM from VertexAI')
elif llmProvider == 'OpenAI':
    from langchain.llms import OpenAI
    llm = OpenAI()
    print('LLM from OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM from OpenAI
Create a chain
As the conversation proceeds, a growing history of past exchanges automatically finds its way into the prompt that the LLM receives:
In [6]:
from langchain.chains import ConversationChain

conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=cassBuffMemory,
)
In [7]:
conversation.predict(input="Hello, how can I roast an apple?")
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI:

> Finished chain.
Out[7]:
" Hi there! Roasting apples is a great way to bring out their natural sweetness. You can roast them in the oven at 350 degrees for about 25-30 minutes, or until they're fork-tender. You can also add a little bit of butter to the top of the apples to make them extra delicious. Does that help?"
In [8]:
conversation.predict(input="Can I do it on a bonfire?")
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI: Hi there! Roasting apples is a great way to bring out their natural sweetness. You can roast them in the oven at 350 degrees for about 25-30 minutes, or until they're fork-tender. You can also add a little bit of butter to the top of the apples to make them extra delicious. Does that help?
Human: Can I do it on a bonfire?
AI:

> Finished chain.
Out[8]:
" Sure, you can roast apples on a bonfire, but you'll need to make sure the fire is at a low heat. You'll want to wrap the apples in foil and place them near the side of the fire where the heat is lower. Roast the apples for about 10-15 minutes, or until they're fork-tender. Don't forget to add a pat of butter before wrapping them up!"
In [9]:
conversation.predict(input="What about a microwave, would the apple taste good?")
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI: Hi there! Roasting apples is a great way to bring out their natural sweetness. You can roast them in the oven at 350 degrees for about 25-30 minutes, or until they're fork-tender. You can also add a little bit of butter to the top of the apples to make them extra delicious. Does that help?
Human: Can I do it on a bonfire?
AI: Sure, you can roast apples on a bonfire, but you'll need to make sure the fire is at a low heat. You'll want to wrap the apples in foil and place them near the side of the fire where the heat is lower. Roast the apples for about 10-15 minutes, or until they're fork-tender. Don't forget to add a pat of butter before wrapping them up!
Human: What about a microwave, would the apple taste good?
AI:

> Finished chain.
Out[9]:
" Unfortunately, microwaves don't get hot enough to properly roast apples. The best way to get the best flavor out of your apples is to roast them in the oven."
In [10]:
message_history.messages
Out[10]:
[HumanMessage(content='Hello, how can I roast an apple?', additional_kwargs={}, example=False),
 AIMessage(content=" Hi there! Roasting apples is a great way to bring out their natural sweetness. You can roast them in the oven at 350 degrees for about 25-30 minutes, or until they're fork-tender. You can also add a little bit of butter to the top of the apples to make them extra delicious. Does that help?", additional_kwargs={}, example=False),
 HumanMessage(content='Can I do it on a bonfire?', additional_kwargs={}, example=False),
 AIMessage(content=" Sure, you can roast apples on a bonfire, but you'll need to make sure the fire is at a low heat. You'll want to wrap the apples in foil and place them near the side of the fire where the heat is lower. Roast the apples for about 10-15 minutes, or until they're fork-tender. Don't forget to add a pat of butter before wrapping them up!", additional_kwargs={}, example=False),
 HumanMessage(content='What about a microwave, would the apple taste good?', additional_kwargs={}, example=False),
 AIMessage(content=" Unfortunately, microwaves don't get hot enough to properly roast apples. The best way to get the best flavor out of your apples is to roast them in the oven.", additional_kwargs={}, example=False)]
Manually tinkering with the prompt
You can craft your own prompt (through a PromptTemplate object) and still take advantage of LangChain's chat memory handling:
In [11]:
from langchain import LLMChain, PromptTemplate
In [12]:
template = """You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
{chat_history}
Human: {human_input}
AI:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"],
    template=template,
)
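At its core, the PromptTemplate fills the named slots in the template string; a quick sketch with plain str.format (the real class additionally validates that input_variables match the placeholders, and composes with chains):

```python
# What PromptTemplate does at its heart: substitute named variables into
# the template string. The chat_history value here is made up for the demo.
template = """You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
{chat_history}
Human: {human_input}
AI:"""

filled = template.format(
    chat_history="Human: Tell me about springs\nAI: Boing! What a bouncy topic.",
    human_input="Er ... I mean the other type actually.",
)
print(filled)
```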
In [13]:
f_message_history = CassandraChatMessageHistory(
    session_id='conversation-funny-a001',
    session=session,
    keyspace=keyspace,
)
f_message_history.clear()
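Note the different session_id: histories stored under distinct session IDs are fully independent conversations (conceptually, the session ID plays the role of a partition key in the underlying table). A toy illustration of this scoping:

```python
from collections import defaultdict

# Toy illustration of session_id scoping: messages filed under different
# session IDs never mix, which is what keeps conversations separate.
class ToySessionStore:
    def __init__(self):
        self._by_session = defaultdict(list)

    def add(self, session_id, message):
        self._by_session[session_id].append(message)

    def messages(self, session_id):
        return list(self._by_session[session_id])

store = ToySessionStore()
store.add("conversation-0123", "Hello, how can I roast an apple?")
store.add("conversation-funny-a001", "Tell me about springs")
print(store.messages("conversation-funny-a001"))  # only the 'funny' thread
```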
In [14]:
f_memory = ConversationBufferMemory(
    memory_key="chat_history",
    chat_memory=f_message_history,
)
In [15]:
llm_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,
    memory=f_memory,
)
In [16]:
llm_chain.predict(human_input="Tell me about springs")
> Entering new chain...
Prompt after formatting:
You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.

Human: Tell me about springs
AI:

> Finished chain.
Out[16]:
' Springs are a great way to get a jump start on the season. They give you a chance to come out of hibernation and enjoy the warmer weather! Plus, you can use them for all sorts of fun activities like trampolining, bouncy castle jumping, or even just taking a nice relaxing stroll.'
In [17]:
llm_chain.predict(human_input='Er ... I mean the other type actually.')
> Entering new chain...
Prompt after formatting:
You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
Human: Tell me about springs
AI: Springs are a great way to get a jump start on the season. They give you a chance to come out of hibernation and enjoy the warmer weather! Plus, you can use them for all sorts of fun activities like trampolining, bouncy castle jumping, or even just taking a nice relaxing stroll.
Human: Er ... I mean the other type actually.
AI:

> Finished chain.
Out[17]:
" Oh, you mean the mechanical kind? Well, they're really helpful when it comes to saving energy and providing support. They're like little powerhouses that help keep things moving and can help you get the job done faster. And they always come in handy when you have a lot of weight to bear!"