cleanup getting started (#15450)

commit 51dcb89a72 (parent 2bbee894bb)
@@ -143,6 +143,10 @@ chain = prompt | llm
 
 We can now invoke it and ask the same question. It still won't know the answer, but it should respond in a more proper tone for a technical writer!
 
+```python
+chain.invoke({"input": "how can langsmith help with testing?"})
+```
+
 The output of a ChatModel (and therefore, of this chain) is a message. However, it's often much more convenient to work with strings. Let's add a simple output parser to convert the chat message to a string.
 
 ```python
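For context: the output-parser step this hunk leads into is, in the published quickstart, roughly the following (a minimal sketch; `prompt` and `llm` are the objects built earlier in the guide, and the exact block sits outside this diff):

```python
from langchain_core.output_parsers import StrOutputParser

# StrOutputParser turns the ChatModel's AIMessage output into a plain string.
output_parser = StrOutputParser()

chain = prompt | llm | output_parser

# invoke() now returns a string rather than a message object.
chain.invoke({"input": "how can langsmith help with testing?"})
```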
@@ -204,7 +208,7 @@ embeddings = OpenAIEmbeddings()
 ```
 
 </TabItem>
-<TabItem value="local" label="Ollama">
+<TabItem value="local" label="Local">
 
 Make sure you have Ollama running (same set up as with the LLM).
 
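For reference, the local-embeddings setup under this relabeled tab looks roughly like this (a sketch assuming the `langchain_community` Ollama integration available at the time; the default model and server address may differ in your setup):

```python
from langchain_community.embeddings import OllamaEmbeddings

# Talks to a locally running Ollama server (http://localhost:11434 by default).
embeddings = OllamaEmbeddings()
```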
@@ -284,7 +288,7 @@ We can now invoke this chain. This returns a dictionary - the response from the
 response = retrieval_chain.invoke({"input": "how can langsmith help with testing?"})
 print(response["answer"])
 
-// LangSmith offers several features that can help with testing:...
+# LangSmith offers several features that can help with testing:...
 ```
 
 This answer should be much more accurate!
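The `retrieval_chain` invoked above is assembled earlier in the quickstart; a minimal sketch of that setup (the `retriever` comes from the vector store built in a prior step, and the prompt wording is approximate):

```python
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("""Answer the following question based only on the provided context:

<context>
{context}
</context>

Question: {input}""")

# Stuff the retrieved documents into {context}, then call the LLM.
document_chain = create_stuff_documents_chain(llm, prompt)

# Retrieve documents relevant to "input" and pass them to document_chain.
retrieval_chain = create_retrieval_chain(retriever, document_chain)
```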
@@ -326,7 +330,7 @@ We can test this out by passing in an instance where the user is asking a follow
 from langchain_core.messages import HumanMessage, AIMessage
 
 chat_history = [HumanMessage(content="Can LangSmith help test my LLM applications?"), AIMessage(content="Yes!")]
-retrieval_chain.invoke({
+retriever_chain.invoke({
     "chat_history": chat_history,
     "input": "Tell me how"
 })
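The `retriever_chain` in the corrected line is the history-aware retriever the quickstart builds just before this passage; a rough sketch of that construction (prompt text approximate, `llm` and `retriever` from earlier steps):

```python
from langchain.chains import create_history_aware_retriever
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}"),
    ("user", "Given the above conversation, generate a search query to look up information relevant to the conversation"),
])

# First rewrites the follow-up question into a standalone search query,
# then runs that query through the underlying retriever.
retriever_chain = create_history_aware_retriever(llm, retriever, prompt)
```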