mirror of
https://github.com/hwchase17/langchain.git
synced 2025-10-01 00:10:09 +00:00
codespell: workflow, config + some (quite a few) typos fixed (#6785)
Probably the most boring PR to review ;) Individual commits might be easier to digest
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
committed by GitHub
parent 931e68692e
commit 0d92a7f357
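The fixes in this commit follow the codespell pattern: scan each token against a table of known misspellings and report the correction. A minimal illustrative sketch — the table below lists a few of the typos actually fixed in this PR, while codespell's real dictionary is far larger:

```python
import re

# A few misspelling -> correction pairs drawn from the typos fixed in this PR.
# codespell's actual dictionary is much larger; this table is illustrative only.
MISSPELLINGS = {
    "neccessary": "necessary",
    "suceeded": "succeeded",
    "controversal": "controversial",
    "standanlone": "standalone",
}

def find_typos(text):
    """Return (word, correction) pairs for known misspellings in `text`."""
    hits = []
    for word in re.findall(r"[A-Za-z]+", text):
        fix = MISSPELLINGS.get(word.lower())
        if fix is not None:
            hits.append((word, fix))
    return hits

print(find_typos("Did he mention who she suceeded"))  # -> [('suceeded', 'succeeded')]
```

Each hunk below is exactly this kind of one-word substitution, leaving the surrounding line untouched.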
@@ -426,7 +426,7 @@ PRINCIPLES
     'age-innappropriate': ConstitutionalPrinciple(critique_request='Did the assistant’s preceding response include anything that may be inappropriate for young children? If so, discuss how it can be revised to be more appropriate.', revision_request='Please rewrite the assistant response to make it more age appropriate for children.', name='age-innappropriate'),
     'derogatory': ConstitutionalPrinciple(critique_request='Think carefully about whether the human’s request succeeded in eliciting responses from the assistant that are derogatory, toxic, racist, sexist or socially harmful in any way, and discuss how the assistant should’ve responded instead.', revision_request='Please rewrite the assistant’s response so as to remove all harmful content, especially anything derogatory, toxic, racist, sexist or socially harmful.', name='derogatory'),
     'illegal': ConstitutionalPrinciple(critique_request='Discuss whether the assistant provided any advice that may encourage illegal or dangerous activity from either the human or others.', revision_request='Revise the assistant’s response so as to refrain from giving legally questionable or dangerous advice.', name='illegal'),
-    'controversial': ConstitutionalPrinciple(critique_request='Discuss ways in which the assistant’s response may be considered controversal or objectionable based on commonsense ethical and moral standards.', revision_request='Revise the assistant’s response if possible so that it’s less controversial and objectionable based on commonsense ethical and moral standards.', name='controversial'),
+    'controversial': ConstitutionalPrinciple(critique_request='Discuss ways in which the assistant’s response may be considered controversial or objectionable based on commonsense ethical and moral standards.', revision_request='Revise the assistant’s response if possible so that it’s less controversial and objectionable based on commonsense ethical and moral standards.', name='controversial'),
     'harmful6': ConstitutionalPrinciple(critique_request='Discuss whether the human is deliberately trying to bait the assistant into generating harmful content, and whether they succeeded in doing so.', revision_request='Write a revision of the assistant’s response that refrains from saying anything harmful.', name='harmful6'),
     'thoughtful': ConstitutionalPrinciple(critique_request='Please comment on whether the assistant’s last response is thoughtful and empathetic. Does it seem like the kind of thing a sensitive friend or therapist might say? Give the strongest argument you can for how it could improve.', revision_request='Please write a new assistant response so that it is more like what a thoughtful, empathetic, caring, sensitive friend or therapist would say.', name='thoughtful'),
     'misogynistic': ConstitutionalPrinciple(critique_request='Take a look at the assistant’s last response and note if it was in any way misogynistic or otherwise gender biased. Give the strongest argument you can that something the assistant said could be interpreted as misogynistic.', revision_request='Please write a new response that does not have any trace of misogyny or gender bias.', name='misogynistic'),
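Every entry in the hunk above follows the same three-field shape: a critique request, a revision request, and a name matching the dict key. A minimal structural stand-in — not LangChain's actual `ConstitutionalPrinciple` class (which is a pydantic model), just a sketch of the shape the diff shows, using the `illegal` entry quoted above:

```python
from typing import NamedTuple

# Stand-in mirroring the three fields visible in the diff; LangChain's real
# ConstitutionalPrinciple is a pydantic model, this is only a structural sketch.
class Principle(NamedTuple):
    critique_request: str
    revision_request: str
    name: str

PRINCIPLES = {
    "illegal": Principle(
        critique_request="Discuss whether the assistant provided any advice that "
        "may encourage illegal or dangerous activity from either the human or others.",
        revision_request="Revise the assistant's response so as to refrain from "
        "giving legally questionable or dangerous advice.",
        name="illegal",
    ),
}

# Principles are looked up by key, so each name field should match its key.
assert all(p.name == key for key, p in PRINCIPLES.items())
```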
@@ -35,7 +35,7 @@ retriever_infos = [
     },
     {
         "name": "pg essay",
-        "description": "Good for answering questions about Paul Graham's essay on his career",
+        "description": "Good for answering questions about Paul Graham's essay on his career",
         "retriever": pg_retriever
     },
     {
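Each `retriever_infos` entry pairs a name and description with a retriever, and the router chain picks the entry whose description best fits the query. An illustrative keyword-overlap router — the real `MultiRetrievalQAChain` routes with an LLM, and the first entry's fields below are hypothetical since the diff only shows the `pg essay` entry:

```python
# Illustrative router: pick the info whose description shares the most words
# with the query. LangChain's MultiRetrievalQAChain routes with an LLM instead.
# The first entry is hypothetical; only "pg essay" appears in the diff above.
retriever_infos = [
    {
        "name": "state of the union",
        "description": "Good for answering questions about the state of the union address",
        "retriever": "sou_retriever",  # placeholder; a real entry holds a retriever object
    },
    {
        "name": "pg essay",
        "description": "Good for answering questions about Paul Graham's essay on his career",
        "retriever": "pg_retriever",  # placeholder
    },
]

def route(query, infos):
    """Return the entry whose description has the largest word overlap with `query`."""
    q = set(query.lower().split())
    return max(infos, key=lambda info: len(q & set(info["description"].lower().split())))

print(route("what about Paul Graham's essay on his career", retriever_infos)["name"])
# -> pg essay
```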
@@ -44,7 +44,7 @@ vectorstore = Chroma.from_documents(documents, embeddings)
 
 </CodeOutputBlock>
 
-We can now create a memory object, which is neccessary to track the inputs/outputs and hold a conversation.
+We can now create a memory object, which is necessary to track the inputs/outputs and hold a conversation.
 
 
 ```python
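The memory object mentioned in that line just records input/output pairs and replays them as history. A minimal stand-in for the idea — not LangChain's `ConversationBufferMemory`, only the shape of what it tracks:

```python
# Minimal stand-in for a conversation memory object: it records each
# question/answer pair and can replay them as history. LangChain's
# ConversationBufferMemory does this (plus prompt formatting) for the chain.
class BufferMemory:
    def __init__(self):
        self.turns = []

    def save_context(self, inputs, outputs):
        """Record one turn from the chain's input and output dicts."""
        self.turns.append((inputs["question"], outputs["answer"]))

    def load_history(self):
        """Return the accumulated (question, answer) turns."""
        return list(self.turns)

memory = BufferMemory()
memory.save_context({"question": "Who spoke?"}, {"answer": "The president."})
print(memory.load_history())  # -> [('Who spoke?', 'The president.')]
```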
@@ -80,7 +80,7 @@ result["answer"]
 
 
 ```python
-query = "Did he mention who she suceeded"
+query = "Did he mention who she succeeded"
 result = qa({"question": query})
 ```
 
@@ -133,7 +133,7 @@ Here's an example of asking a question with some chat history
 
 ```python
 chat_history = [(query, result["answer"])]
-query = "Did he mention who she suceeded"
+query = "Did he mention who she succeeded"
 result = qa({"question": query, "chat_history": chat_history})
 ```
 
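The snippet in that hunk shows the recurring pattern in this document: append each (question, answer) turn to `chat_history` and pass it back in on the next call. It can be sketched with a stub chain — `qa` here is a placeholder, not LangChain's `ConversationalRetrievalChain`, though it accepts the same dict:

```python
# Sketch of the chat_history accumulation pattern used in the hunks above.
# `qa` is a stub; a real ConversationalRetrievalChain would retrieve documents
# and call an LLM, but it takes the same {"question", "chat_history"} dict.
def qa(inputs):
    return {"answer": f"answer to: {inputs['question']}"}

chat_history = []
for query in ["Who is the president?", "Did he mention who she succeeded"]:
    result = qa({"question": query, "chat_history": chat_history})
    chat_history.append((query, result["answer"]))

print(len(chat_history))  # -> 2 (one tuple per turn)
```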
@@ -152,7 +152,7 @@ result['answer']
 
 ## Using a different model for condensing the question
 
-This chain has two steps. First, it condenses the current question and the chat history into a standalone question. This is neccessary to create a standanlone vector to use for retrieval. After that, it does retrieval and then answers the question using retrieval augmented generation with a separate model. Part of the power of the declarative nature of LangChain is that you can easily use a separate language model for each call. This can be useful to use a cheaper and faster model for the simpler task of condensing the question, and then a more expensive model for answering the question. Here is an example of doing so.
+This chain has two steps. First, it condenses the current question and the chat history into a standalone question. This is necessary to create a standanlone vector to use for retrieval. After that, it does retrieval and then answers the question using retrieval augmented generation with a separate model. Part of the power of the declarative nature of LangChain is that you can easily use a separate language model for each call. This can be useful to use a cheaper and faster model for the simpler task of condensing the question, and then a more expensive model for answering the question. Here is an example of doing so.
 
 
 ```python
@@ -178,7 +178,7 @@ result = qa({"question": query, "chat_history": chat_history})
 
 
 ```python
 chat_history = [(query, result["answer"])]
-query = "Did he mention who she suceeded"
+query = "Did he mention who she succeeded"
 result = qa({"question": query, "chat_history": chat_history})
 ```
 
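The two-step split described in the prose hunk above — a cheap, fast model condenses the follow-up into a standalone question, then a stronger model answers over retrieved documents — can be sketched with stubs. Both "models" below are placeholders, not real LLM calls:

```python
# Sketch of the two-model split described in the prose above. Both steps are
# stubs; a real chain would call a cheap LLM to condense and a stronger one
# to answer, as the document describes.
def cheap_condense(question, chat_history):
    """Rewrite a follow-up into a standalone question using the last turn."""
    if not chat_history:
        return question
    last_question, _ = chat_history[-1]
    return f"{question} (in the context of: {last_question})"

def strong_answer(standalone_question, docs):
    """Answer the standalone question over retrieved document chunks."""
    return f"answer using {len(docs)} docs for: {standalone_question}"

history = [("Who gave the speech?", "The president.")]
q = cheap_condense("Did he mention who she succeeded", history)
print(strong_answer(q, docs=["chunk1", "chunk2"]))
```

The point of the split is cost: the condensing step is simple enough for the cheaper model, so only the final answer pays for the expensive one.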
@@ -352,7 +352,7 @@ result = qa({"question": query, "chat_history": chat_history})
 
 
 ```python
 chat_history = [(query, result["answer"])]
-query = "Did he mention who she suceeded"
+query = "Did he mention who she succeeded"
 result = qa({"question": query, "chat_history": chat_history})
 ```