Compare commits


268 Commits

Author SHA1 Message Date
Bagatur
cdfe2c96c5 bump 263 (#9156) 2023-08-12 12:36:44 -07:00
Leonid Ganeline
19f504790e docstrings: document_loaders consistency 2 (#9148)
This is Part 2. See #9139 (Part 1).
2023-08-11 16:25:40 -07:00
Harrison Chase
1b58460fe3 update keys for chain (#5164)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-11 16:25:13 -07:00
Eugene Yurtsev
aca8cb5fba API Reference: Do not document private modules (#9042)
This PR prevents documentation of private modules in the API reference
2023-08-11 15:58:14 -07:00
胡亮
7edf4ca396 Support multi gpu inference for HuggingFaceEmbeddings (#4732)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-11 15:55:44 -07:00
UmerHA
8aab39e3ce Added SmartGPT workflow (issue #4463) (#4816)
# Added SmartGPT workflow by providing SmartLLM wrapper around LLMs
Edit:
As @hwchase17 suggested, this should be a chain, not an LLM. I have
adapted the PR.

It is used like this:
```
from langchain.prompts import PromptTemplate
from langchain.chains import SmartLLMChain
from langchain.chat_models import ChatOpenAI

hard_question = "I have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it?"

llm = ChatOpenAI(model_name="gpt-4")
prompt = PromptTemplate.from_template(hard_question)
chain = SmartLLMChain(llm=llm, prompt=prompt, verbose=True)

chain.run({})
```


Original text: 
Added SmartLLM wrapper around LLMs to allow for the SmartGPT workflow (as
in https://youtu.be/wVzuvf9D9BU). SmartLLM can be used wherever an LLM can
be used. E.g.:

```
smart_llm = SmartLLM(llm=OpenAI())
smart_llm("What would be a good company name for a company that makes colorful socks?")
```
or
```
smart_llm = SmartLLM(llm=OpenAI())
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=smart_llm, prompt=prompt)
chain.run("colorful socks")
```

SmartGPT consists of 3 steps:

1. Ideate - generate n possible solutions ("ideas") to user prompt
2. Critique - find flaws in every idea & select best one
3. Resolve - improve upon best idea & return it

Fixes #4463

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

- @hwchase17
- @agola11

Twitter: [@UmerHAdil](https://twitter.com/@UmerHAdil) | Discord:
RicChilligerDude#7589

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-11 15:44:27 -07:00
Lucas Pickup
1d3735a84c Ensure deployment_id is set to provided deployment, required for Azure OpenAI. (#5002)
# Ensure deployment_id is set to provided deployment, required for Azure
OpenAI.
---------

Co-authored-by: Lucas Pickup <lupickup@microsoft.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-11 15:43:01 -07:00
Bagatur
45741bcc1b Bagatur/vectara nit (#9140)
Co-authored-by: Ofer Mendelevitch <ofer@vectara.com>
2023-08-11 15:32:03 -07:00
Dominick DEV
9b64932e55 Add LangChain utility for real-time crypto exchange prices (#4501)
This commit adds the LangChain utility which allows for the real-time
retrieval of cryptocurrency exchange prices. With LangChain, users can
easily access up-to-date pricing information by running the command
".run(from_currency, to_currency)". This new feature provides a
convenient way to stay informed on the latest exchange rates and make
informed decisions when trading crypto.
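
A minimal sketch of the call pattern described above; the wrapper class name and import path below are hypothetical, not confirmed by this PR:

```
# Hypothetical sketch -- the class name and import path are illustrative only.
from langchain.utilities import CryptoExchangeAPIWrapper  # hypothetical

crypto = CryptoExchangeAPIWrapper()
# .run(from_currency, to_currency) as described above
price = crypto.run("BTC", "USD")
print(price)
```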


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-11 14:45:06 -07:00
Joshua Sundance Bailey
eaa505fb09 Create ArcGISLoader & example notebook (#8873)
- Description: Adds the ArcGISLoader class to
`langchain.document_loaders`
  - Allows users to load data from ArcGIS Online, Portal, and similar
- Users can authenticate with `arcgis.gis.GIS` or retrieve public data
anonymously
  - Uses the `arcgis.features.FeatureLayer` class to retrieve the data
  - Defines the most relevant keyword arguments and accepts `**kwargs`
- Dependencies: Using this class requires `arcgis` and, optionally,
`bs4.BeautifulSoup`.

Tagging maintainers:
  - DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
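
A minimal usage sketch based on the description above (the layer URL is a placeholder):

```
from langchain.document_loaders import ArcGISLoader

# Placeholder URL for a public ArcGIS Online feature layer.
loader = ArcGISLoader("https://services.arcgis.com/example/FeatureServer/0")
docs = loader.load()
print(docs[0].metadata)
```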

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-11 14:33:40 -07:00
Bagatur
e21152358a fix (#9145) 2023-08-11 13:58:23 -07:00
Leonid Ganeline
edb585228d docstrings: document_loaders consistency (#9139)
Formatted docstrings from different formats to a consistent format, like:
>Loads processed docs from Docugami.
"Load from `Docugami`."

>Loader that uses Unstructured to load HTML files.
"Load `HTML` files using `Unstructured`."

>Load documents from a directory.
"Load from a directory."
 
- `Load` - no `Loads`
- DocumentLoader always loads Documents, so no more
"documents/docs/texts/ etc"
- integrated systems and APIs enclosed in backticks.
2023-08-11 13:09:31 -07:00
Aashish Saini
0aabded97f Updating interactive walkthrough link in index.md to resolve 404 error (#9063)
Updated interactive walkthrough link in index.md to resolve 404 error.
Also, expressing deep gratitude to LangChain library developers for
their exceptional efforts 🥇 .

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-11 13:08:56 -07:00
Markus Schiffer
00bf472265 Fix for SVM retriever discarding document metadata (#9141)
As stated in the title, the SVM retriever discarded the metadata of
passed-in docs. This code fixes that. I also added a unit test covering
it.
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-11 13:08:17 -07:00
Bagatur
bace17e0aa rm integration deps (#9142) 2023-08-11 12:43:08 -07:00
Eugene Yurtsev
44bc89b7bf Support a few list like operations on ChatPromptTemplate (#9077)
Make it easier to work with chat prompt template
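
A sketch of what the list-like usage could look like, assuming an `append` method is among the added operations:

```
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate

prompt = ChatPromptTemplate.from_messages([("system", "You are a helpful assistant.")])
# Assumed list-like operation added by this PR: appending a message template.
prompt.append(HumanMessagePromptTemplate.from_template("{question}"))
print(prompt.format_messages(question="What is LangChain?"))
```
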
2023-08-11 14:49:51 -04:00
Hai The Dude
e4418d1b7e Added new use case docs for Web Scraping, Chromium loader, BS4 transformer (#8732)
- Description: Added a new use case category called "Web Scraping", and
a tutorial on scraping websites using the OpenAI Functions Extraction
chain to the docs.
  - Tag maintainer: @baskaryan @hwchase17
- Twitter handle: https://www.linkedin.com/in/haiphunghiem/ (I'm on
LinkedIn mostly)

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
2023-08-11 11:46:59 -07:00
sseide
6cb763507c add basic support for redis cluster server (#9128)
This change updates the central utility class to recognize a Redis
cluster server after connection and return a new cluster-aware Redis
client. The "normal" Redis client would not be able to talk to a cluster
node because keys might be stored on other shards of the Redis cluster
and therefore not be readable or writable.

With this patch, clients do not need to know which kind of Redis server
it is; they just connect through the same API calls for standalone and
cluster servers.

There are no dependencies added due to this MR.

Remark - with current redis-py client library (4.6.0) a cluster cannot
be used as VectorStore. It can be used for other use-cases. There is a
bug / missing feature(?) in the Redis client breaking the VectorStore
implementation. I opened an issue at the client library too
(redis/redis-py#2888) to fix this. As soon as this is fixed in
`redis-py` library it should be usable there too.
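
A sketch of the connection flow described above, assuming the central helper is `get_client` in `langchain.utilities.redis`:

```
from langchain.utilities.redis import get_client  # assumed helper location

# Same call for standalone and cluster servers; a cluster-aware client
# is returned automatically when the server reports cluster mode.
client = get_client(redis_url="redis://localhost:6379")
client.set("greeting", "hello")
print(client.get("greeting"))
```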

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-11 11:37:44 -07:00
David Duong
6d03f8b5d8 Add serialisable support for Replicate (#8525) 2023-08-11 11:35:21 -07:00
niklub
16af5f8690 Add LabelStudio integration (#8880)
This PR introduces [Label Studio](https://labelstud.io/) integration
with LangChain via `LabelStudioCallbackHandler`:

- sending data to the Label Studio instance
- labeling dataset for supervised LLM finetuning
- rating model responses
- tracking and displaying chat history
- support for custom data labeling workflow

### Example

```
# Imports added for completeness (import paths assumed).
from langchain.callbacks import LabelStudioCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import AIMessage, HumanMessage, SystemMessage

chat_llm = ChatOpenAI(callbacks=[LabelStudioCallbackHandler(mode="chat")])
chat_llm([
    SystemMessage(content="Always use emojis in your responses."),
    HumanMessage(content="Hey AI, how's your day going?"),
    AIMessage(content="🤖 I don't have feelings, but I'm running smoothly! How can I help you today?"),
    HumanMessage(content="I'm feeling a bit down. Any advice?"),
    AIMessage(content="🤗 I'm sorry to hear that. Remember, it's okay to seek help or talk to someone if you need to. 💬"),
    HumanMessage(content="Can you tell me a joke to lighten the mood?"),
    AIMessage(content="Of course! 🎭 Why did the scarecrow win an award? Because he was outstanding in his field! 🌾"),
    HumanMessage(content="Haha, that was a good one! Thanks for cheering me up."),
    AIMessage(content="Always here to help! 😊 If you need anything else, just let me know."),
    HumanMessage(content="Will do! By the way, can you recommend a good movie?"),
])
```



### Dependencies
- [label-studio](https://pypi.org/project/label-studio/)
- [label-studio-sdk](https://pypi.org/project/label-studio-sdk/)

https://twitter.com/labelstudiohq

---------

Co-authored-by: nik <nik@heartex.net>
2023-08-11 11:24:10 -07:00
Bagatur
8cb2594562 Bagatur/dingo (#9079)
Co-authored-by: gary <1625721671@qq.com>
2023-08-11 10:54:45 -07:00
Jacques Arnoux
926c64da60 Fix web research retriever for unknown links in results (#9115)
Fixes an issue with the web research retriever for unknown links in
results, which currently makes the retriever crash sometimes.

@rlancemartin
2023-08-11 10:50:37 -07:00
Manuel Soria
31cfc00845 Code understanding use case (#8801)
Code understanding docs

---------

Co-authored-by: Manuel Soria <manuel.soria@greyscaleai.com>
Co-authored-by: Lance Martin <lance@langchain.dev>
2023-08-11 10:16:05 -07:00
Alvaro Bartolome
f7ae183f40 ArgillaCallbackHandler to properly use default values for api_url and api_key (#9113)
As of the recent PR at #9043, after some testing we've realised that the
default values were not being used for `api_key` and `api_url`. Besides
that, the default for `api_key` was set to `argilla.apikey`, but since
the default values are intended for people using the Argilla Quickstart
(easy to run and setup), the defaults should be instead `owner.apikey`
if using Argilla 1.11.0 or higher, or `admin.apikey` if using a lower
version of Argilla.

Additionally, we've removed the f-string replacements from the
docstrings.

---------

Co-authored-by: Gabriel Martin <gabriel@argilla.io>
2023-08-11 09:37:06 -07:00
Bagatur
0e5d09d0da dalle nb fix (#9125) 2023-08-11 08:21:48 -07:00
Francisco Ingham
9249d305af tagging docs refactor (#8722)
refactor of tagging use case according to new format

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
2023-08-11 08:06:07 -07:00
Bagatur
01ef786e7e bump 262 (#9108) 2023-08-11 01:29:07 -07:00
Bagatur
3b754b5461 Bagatur/filter metadata (#9015)
Co-authored-by: Matt Robinson <mrobinson@unstructuredai.io>
2023-08-11 01:10:00 -07:00
Aayush Shah
a429145420 Minor grammatical error (#9102)
Corrected a grammatical error in the
https://python.langchain.com/docs/modules/model_io/models/llms/ document
😄
2023-08-11 01:01:40 -07:00
Kim Minjong
7f0e847c13 Update pydantic format instruction prompt (#9095)
- remove unopened bracket
2023-08-11 00:22:13 -07:00
Ashutosh Sanzgiri
991b448dfc minor edits (#9093)
Description:

Minor edit to PR#845

Thanks!
2023-08-10 23:40:36 -07:00
Bagatur
3ab4e21579 fix json tool (#9096) 2023-08-10 23:39:25 -07:00
Sam Groenjes
2184e3a400 Fix IndexError when input_list is Empty in prep_prompts (#5769)
This MR corrects the IndexError arising in prep_prompts method when no
documents are returned from a similarity search.

Fixes #1733 
Co-authored-by: Sam Groenjes <sam.groenjes@darkwolfsolutions.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-10 22:50:39 -07:00
Chenyu Zhao
c0acbdca1b Update Fireworks model names (#9085) 2023-08-10 19:23:42 -07:00
Charles Lanahan
a2588d6c57 Update openai embeddings notebook with correct embedding model in section 2 (#5831)
The second section looks like a copy/paste from the first section and
doesn't include the specific embedding model mentioned in the example, so
I added it for clarity.
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-10 19:02:10 -07:00
Bagatur
b80e3825a6 Bagatur/pinecone by vector (#9087)
Co-authored-by: joseph <joe@outverse.com>
2023-08-10 18:28:55 -07:00
Nikhil Kumar
6abb2c2c08 Buffer method of ConversationTokenBufferMemory should be able to return messages as string (#7057)
### Description:
`ConversationTokenBufferMemory` should have a simple way of returning
the conversation messages as a string.

Previously, to accomplish this you could only return memory as a list
through the buffer method and call `get_buffer_string` by importing it
from `langchain.schema`, or use the `load_memory_variables` method and
key into `self.memory_key`.
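
A sketch of the simpler path this PR describes; the exact accessor name (`buffer_as_str` below) is an assumption:

```
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationTokenBufferMemory

memory = ConversationTokenBufferMemory(llm=ChatOpenAI(), max_token_limit=100)
memory.save_context({"input": "Hi there"}, {"output": "Hello!"})
# Assumed new accessor returning the conversation as a single string.
print(memory.buffer_as_str)
```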

### Maintainer
@hwchase17

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-10 18:17:22 -07:00
William FH
57dd4daa9a Add string example mapper (#9086)
Now that we accept any runnable or arbitrary function to evaluate, we
don't always look up the input keys. If an evaluator requires
references, we should try to infer if there's one key present. We only
have delayed validation here but it's better than nothing
2023-08-10 17:07:02 -07:00
Josh Phillips
5fc07fa524 change id column type to uuid to match function (#7456)
The table creation commands in these examples do not match what the
recently updated functions in these examples expect.
This change updates the column type in the table creation command.
Issue number for my report of the doc problem: #7446
@rlancemartin and @eyurtsev I believe this is your area
Twitter: @j1philli

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-10 16:57:19 -07:00
Bidhan Roy
02430e25b6 BagelDB (bageldb.ai), VectorStore integration. (#8971)
- **Description**: [BagelDB](bageldb.ai) is a collaborative vector
database. This integrates the bageldb PyPi package with langchain, with
related tests and code.

  - **Issue**: Not applicable.
  - **Dependencies**: `betabageldb` PyPi package.
  - **Tag maintainer**: @rlancemartin, @eyurtsev, @baskaryan
  - **Twitter handle**: bageldb_ai (https://twitter.com/BagelDB_ai)
  
We ran `make format`, `make lint` and `make test` locally.

Followed the contribution guideline thoroughly
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md

---------

Co-authored-by: Towhid1 <nurulaktertowhid@gmail.com>
2023-08-10 16:48:36 -07:00
DJ Atha
ee52482db8 Fix issue 7445 (#7635)
Description: updated BabyAGI examples and experimental to append the
iteration to the result id, fixing an error when storing data to the
vectorstore.
Issue: 7445
Dependencies: no
Tag maintainer: @eyurtsev
This fix worked for me locally. Happy to take some feedback and iterate
on a better solution. I was considering appending a uuid instead but
didn't want to overcomplicate the example.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-10 16:29:31 -07:00
Harrison Chase
bb6fbf4c71 openai adapters (#8988)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Nuno Campos <nuno@boringbits.io>
2023-08-10 16:08:50 -07:00
Harrison Chase
45f0f9460a add async for python repl (#9080) 2023-08-10 16:07:06 -07:00
Neil Murphy
105c787e5a Add convenience methods to ConversationBufferMemory and ConversationB… (#8981)
Add convenience methods to `ConversationBufferMemory` and
`ConversationBufferWindowMemory` to get buffer either as messages or as
string.

Helps when `return_messages` is set to `True` but you want access to the
messages as a string, and vice versa.

@hwchase17

One use case: Using a `MultiPromptRouter` where `default_chain` is
`ConversationChain`, but destination chains are `LLMChains`. Injecting
chat memory into prompts for destination chains prints a stringified
`List[Messages]` in the prompt, which creates a lot of noise. These
convenience methods allow caller to choose either as needed.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-10 15:45:30 -07:00
Zend
6221eb5974 Recursive url loader w/ test (#8813)
Description: Due to an issue with the test, this is a separate PR with
the test for #8502

Tag maintainer: @rlancemartin

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-10 14:50:31 -07:00
Junlin Zhou
cb5fb751e9 Enhance regex of structured_chat agents' output parser (#8965)
The current regex only extracts the agent's action between plain code
fences (```); this commit extracts the action between both ```json fences
and plain ``` fences.

This is very similar to #7511 
Co-authored-by: zjl <junlinzhou@yzbigdata.com>
2023-08-10 14:26:07 -07:00
Bagatur
16bd328aab Use Embeddings in pinecone (#8982)
cc @eyurtsev @olivier-lacroix @jamescalam 

redo of #2741
2023-08-10 14:22:41 -07:00
Piyush Jain
8eea46ed0e Bedrock embeddings async methods (#9024)
## Description
This PR adds the `aembed_query` and `aembed_documents` async methods to
improve embeddings generation for large documents. The implementation
uses asyncio tasks and `asyncio.gather` to achieve concurrency, since
boto3 has no async Bedrock API.
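
A minimal sketch of calling the new async methods (assumes AWS credentials are configured for Bedrock):

```
import asyncio

from langchain.embeddings import BedrockEmbeddings

async def main():
    embeddings = BedrockEmbeddings()  # assumes default AWS config
    # Fans out per-document calls via asyncio tasks + gather internally.
    vectors = await embeddings.aembed_documents(["doc one", "doc two"])
    query_vec = await embeddings.aembed_query("a question")
    print(len(vectors), len(query_vec))

asyncio.run(main())
```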

### Maintainers
@agola11 
@aarora79  

### Open questions
To avoid throttling from the Bedrock API, should there be an option to
limit the concurrency of the calls?
2023-08-10 14:21:03 -07:00
Eugene Yurtsev
67ca187560 Fix incorrect code blocks in documentation (#9060)
Fixes incorrect code block syntax in doc strings.
2023-08-10 14:13:42 -07:00
Eugene Yurtsev
46f3428cb3 Fix more incorrect code blocks in doc strings (#9073)
Fix 2 more incorrect code blocks in strings
2023-08-10 13:49:15 -07:00
Nicolas
e3fb11bc10 docs: (Mendable Search) Fixes stuck when tabbing out issue (#9074)
This fixes Mendable not completing when tabbing out and fixes the
duplicate message issue as well.
2023-08-10 13:46:06 -07:00
Bagatur
1edead28b8 Add docs community page (#8992)
Co-authored-by: briannawolfson <brianna.wolfson@gmail.com>
2023-08-10 13:41:35 -07:00
Eugene Yurtsev
a5a4c53280 RedisStore: Update init and Documentation updates (#9044)
* Update Redis Store to support init from parameters
* Update notebook to show how to use redis store, and some fixes in
documentation
2023-08-10 15:30:29 -04:00
Bagatur
80b98812e1 Update README.md 2023-08-10 12:01:20 -07:00
Leonid Ganeline
fcbbddedae ArxivLoader fix for issue 9046 (#9061)
Fixed #9046 
Added unit tests for this fix.
 @eyurtsev
2023-08-10 14:59:39 -04:00
Mike Lambert
e94a5d753f Move from test to supported claude-instant-1 model (#9066)
Moves from "test" model to "claude-instant-1" model which is supported
and has actual capacity
2023-08-10 11:57:28 -07:00
Eugene Yurtsev
b7bc8ec87f Add excludes to FileSystemBlobLoader (#9064)
Add option to specify exclude patterns.

https://github.com/langchain-ai/langchain/discussions/9059
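
A usage sketch, assuming the new option is an `exclude` list of glob patterns alongside the existing `glob` option:

```
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader

# `exclude` patterns (assumed name) are globs, like the `glob` option.
loader = FileSystemBlobLoader("/data", glob="**/*", exclude=["**/*.pyc", "**/.git/**"])
for blob in loader.yield_blobs():
    print(blob.path)
```
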
2023-08-10 14:56:58 -04:00
Eugene Yurtsev
6c70f491ba ChatPromptTemplate pending deprecation proposal (#9004)
Pending deprecations for ChatPromptTemplate proposals
2023-08-10 14:40:55 -04:00
Bagatur
f3f5853e9f update api ref examples (#9065)
manually update for now
2023-08-10 11:28:24 -07:00
TRY-ER
2431eca700 Agent vector store tool doc (#9029)
I was initially confused whether to use create_vectorstore_agent or
create_vectorstore_router_agent due to the lack of documentation, so I
created simple documentation for each function about its use case.
- Description: Added the doc_strings in create_vectorstore_agent and
create_vectorstore_router_agent to point out the difference in their
use cases
  - Tag maintainer: @rlancemartin, @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-10 11:13:12 -07:00
Bagatur
641cb80c9d update pr temp (#9062) 2023-08-10 11:10:06 -07:00
Alvaro Bartolome
08a0741d82 Update ArgillaCallbackHandler as of latest argilla release (#9043)
Hi @agola11, or whoever is reviewing this PR 😄 

## What's in this PR?

As of the latest Argilla release, we'll change and refactor some things
to make some workflows easier, one of those is how everything's pushed
to Argilla, so that now there's no need to call `push_to_argilla` over a
`FeedbackDataset` when either `push_to_argilla` is called for the first
time, or `from_argilla` is called; among others.

We also add some class variables to make sure those are easy to update
in case we update those internally in the future, also to make the
`warnings.warn` message lighter from the code view.

P.S. Regarding the Twitter/X mention feel free to do so at either
https://twitter.com/argilla_io or https://twitter.com/alvarobartt, or
both if applicable, otherwise, just the first Twitter/X handle.
2023-08-10 10:59:46 -07:00
Blake (Yung Cher Ho)
8d351bfc20 Takeoff integration (#9045)
## Description:
This PR adds the Titan Takeoff Server to the available LLMs in
LangChain.

Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware in a single command. Most
generative model architectures are included, such as Falcon, Llama 2,
GPT2, T5 and many more.
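
A minimal sketch, assuming the class is exposed as `TitanTakeoff` and a Takeoff server is running locally on the default port:

```
from langchain.llms import TitanTakeoff  # assumed export name

llm = TitanTakeoff(base_url="http://localhost:8000")  # default local server
print(llm("What is the capital of France?"))
```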

Read more about Titan Takeoff here:
-
[Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)

#### Testing
As Titan Takeoff runs locally on port 8000 by default, no network access
is needed. Responses are mocked for testing.

- [x] Make Lint
- [x] Make Format
- [x] Make Test

#### Dependencies
No new dependencies are introduced. However, users will need to install
the titan-iris package in their local environment and start the Titan
Takeoff inferencing server in order to use the Titan Takeoff
integration.

Thanks for your help and please let me know if you have any questions.

cc: @hwchase17 @baskaryan
2023-08-10 10:56:06 -07:00
Nuno Campos
3bdc273ab3 Implement .transform() in RunnablePassthrough() (#9032)
- This ensures passthrough doesn't break streaming
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-10 10:41:19 -07:00
Bagatur
206f809366 fix sched ci (more) (#9056) 2023-08-10 10:39:29 -07:00
Aashish Saini
8a320e55a0 Corrected grammatical errors and spelling mistakes in the index.mdx file. (#9026)
Expressing gratitude to the creator for crafting this remarkable
application 🙌. This PR enhances grammar and spelling in the
documentation for a polished reader experience.

Your feedback is valuable as always 

@baskaryan , @hwchase17 , @eyurtsev
2023-08-10 10:17:09 -07:00
Bagatur
e5db8a16c0 Bagatur/fix sched (#9054) 2023-08-10 09:34:44 -07:00
Bagatur
e162fd418a fix sched ci (#9053) 2023-08-10 09:29:46 -07:00
Ismail Pelaseyed
abb1264edf Fix issue with Metaphor Search Tool throwing error on missing keys in API response (#9051)
- Description: Fixes an issue with the Metaphor Search Tool throwing an
error when keys are missing in the API response.
  - Issue: #9048 
  - Tag maintainer: @hinthornw @hwchase17 
  - Twitter handle: @pelaseyed
2023-08-10 09:07:00 -07:00
Eugene Yurtsev
5e05ba2140 Add embeddings cache (#8976)
This PR adds the ability to temporarily cache or persistently store
embeddings. 

A notebook has been included showing how to set up the cache and how to
use it with a vectorstore.
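
A minimal sketch of setting up the cache, using the `CacheBackedEmbeddings` and `LocalFileStore` names from this PR series:

```
from langchain.embeddings import CacheBackedEmbeddings, OpenAIEmbeddings
from langchain.storage import LocalFileStore

underlying = OpenAIEmbeddings()
store = LocalFileStore("./embedding-cache/")
cached = CacheBackedEmbeddings.from_bytes_store(underlying, store, namespace=underlying.model)
# Repeat calls for the same text are served from the cache.
vectors = cached.embed_documents(["hello", "world"])
```
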
2023-08-10 11:15:30 -04:00
Bagatur
6e14f9548b bump 261 (#9041) 2023-08-10 07:59:27 -07:00
Lance Martin
2380492c8e API use case (#8546)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-10 07:52:54 -07:00
Eugene Yurtsev
d21333d710 Add redis storage (#8980)
Add a redis implementation of a BaseStore
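
A usage sketch, assuming the class is exposed as `RedisStore` with the byte-oriented `BaseStore` interface (the `redis_url` parameter matches the init-from-parameters change above):

```
from langchain.storage import RedisStore  # assumed export path

store = RedisStore(redis_url="redis://localhost:6379")
store.mset([("key1", b"value1"), ("key2", b"value2")])
print(store.mget(["key1", "key2"]))
```
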
2023-08-10 10:48:35 -04:00
Luca Foppiano
dfb93dd2b5 Improved grobid documentation (#9025)
- Description: Improvements to the Grobid loader documentation: fixed
typos and suggested using the Docker image instead of installing Grobid
locally (the documentation was also limited to Mac, while Docker allows
running on any platform)
  - Tag maintainer: @rlancemartin, @eyurtsev
  - Twitter handle: @whitenoise
2023-08-10 10:47:22 -04:00
Hiroshige Umino
2c7297d243 Fix a broken code block display (#9034)
- Description: Fix a broken code block in this page:
https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/
- Issue: N/A
- Dependencies: None
- Tag maintainer: @baskaryan
- Twitter handle: yaotti
2023-08-10 10:39:01 -04:00
Bagatur
434a96415b make runnable dir (#9016)
Co-authored-by: Nuno Campos <nuno@boringbits.io>
2023-08-10 08:56:37 +01:00
Nuno Campos
c7a489ae0d Small improvements for tracer and debug output of runnables (#8683)
2023-08-10 07:24:12 +01:00
EricFan
618cf5241e Open file in UTF-8 encoding (#6919) (#8943)
FileCallbackHandler cannot handle some languages, for example Chinese.
Opening the file with UTF-8 encoding fixes it.
@agola11
  
**Issue**: #6919 
**Dependencies**: NO dependencies,

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-09 17:54:21 -07:00
colegottdank
f4a47ec717 Add optional model kwargs to ChatAnthropic to allow overrides (#9013)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-09 17:34:00 -07:00
Piyush Jain
3b51817706 Updating port and ssl use in sample notebook (#8995)
## Description
This PR updates the sample notebook to use the default port (8182) and
the ssl for the Neptune database connection.
2023-08-09 17:08:48 -07:00
Kaizen
bbbd2b076f DirectoryLoader slicing (#8994)
DirectoryLoader can now return a random sample of files in a directory.
Parameters added are:
- sample_size
- randomize_sample
- sample_seed
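
A usage sketch with the new parameters:

```
from langchain.document_loaders import DirectoryLoader

# Load a reproducible random sample of 10 markdown files.
loader = DirectoryLoader(
    "./docs", glob="**/*.md", sample_size=10, randomize_sample=True, sample_seed=42
)
docs = loader.load()
```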


@rlancemartin, @eyurtsev

---------

Co-authored-by: Andrew Oseen <amovfx@protonmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-09 16:05:16 -07:00
IanRogers-101Ways
d248481f13 skip over empty google spreadsheets (#8974)
- Description: Allow GoogleDriveLoader to handle empty spreadsheets  
- Issue: Currently GoogleDriveLoader will crash if it tries to load a
spreadsheet with an empty sheet
  - Dependencies: n/a
  - Tag maintainer: @rlancemartin, @eyurtsev
2023-08-09 16:05:02 -07:00
Eugene Yurtsev
efa02ed768 Suppress divide-by-zero warnings for cosine similarity (#9006)
Suppress runtime warnings for divide by zero, as the downstream code
handles the scenario (handling inf and nan).
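
The suppression pattern in question, sketched with NumPy (illustrative, not the library's exact code):

```
import numpy as np

a = np.zeros(3)  # zero vector -> zero norm -> divide by zero
b = np.array([1.0, 2.0, 3.0])
with np.errstate(divide="ignore", invalid="ignore"):
    sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(sim)  # nan here; downstream code handles nan/inf explicitly
```
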
2023-08-09 15:56:51 -07:00
Leonid Ganeline
5454591b0a docstrings cleanup (#8993)
Added/Updated docstrings

 @baskaryan
2023-08-09 15:49:06 -07:00
Massimiliano Pronesti
c72da53c10 Add logprobs to SamplingParameters in vllm (#9010)
This PR amends #8806, which I opened a few days ago, by adding the extra
`logprobs` parameter that I accidentally omitted.
2023-08-09 15:48:29 -07:00
Bagatur
8dd071ad08 import airbyte loaders (#9009) 2023-08-09 14:51:15 -07:00
Bagatur
96d064e305 bump 260 (#9002) 2023-08-09 13:40:49 -07:00
Michael Shen
c2f46b2cdb Fixed wrong paper reference (#8970)
The ReAct reference pointed to the MRKL paper. Corrected so that it
points to the actual ReAct paper (#8964).
2023-08-09 16:17:46 -04:00
Nuno Campos
808248049d Implement a router for openai functions (#8589) 2023-08-09 21:17:04 +01:00
Eugene Yurtsev
a6e6e9bb86 Fix airbyte loader (#8998)
Fix airbyte loader

https://github.com/langchain-ai/langchain/issues/8996
2023-08-09 16:13:06 -04:00
William FH
90579021f8 Update Key Check (#8948)
In the eval loop. It needn't be done unless you are creating the
corresponding evaluators.
2023-08-09 12:33:00 -07:00
Jerzy Czopek
539672a7fd Feature/fix azureopenai model mappings (#8621)
This pull request aims to ensure that the `OpenAICallbackHandler` can
properly calculate the total cost for Azure OpenAI chat models. The
following changes have resolved this issue:

- The `model_name` has been added to the ChatResult llm_output. Without
this, the default values of `gpt-35-turbo` were applied. This was
causing the total cost for Azure OpenAI's GPT-4 to be significantly
inaccurate.
- A new parameter `model_version` has been added to `AzureChatOpenAI`.
Azure does not include the model version in the response. With the
addition of `model_name`, this is not a significant issue for GPT-4
models, but it's an issue for GPT-3.5-Turbo. Version 0301 (default) of
GPT-3.5-Turbo on Azure has a flat rate of 0.002 per 1k tokens for both
prompt and completion. However, version 0613 introduced a split in
pricing for prompt and completion tokens.
- The `OpenAICallbackHandler` implementation has been updated with the
proper model names, versions, and cost per 1k tokens.

Unit tests have been added to ensure the functionality works as
expected; the Azure ChatOpenAI notebook has been updated with examples.
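
A sketch of the new parameter (the deployment name is a placeholder, and Azure OpenAI credentials are assumed to be set via environment variables):

```
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(
    deployment_name="my-gpt-35-deployment",  # placeholder deployment
    model_version="0613",  # new parameter: picks the right Azure pricing
)
```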

Maintainers: @hwchase17, @baskaryan

Twitter handle: @jjczopek

---------

Co-authored-by: Jerzy Czopek <jerzy.czopek@avanade.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-09 10:56:15 -07:00
Bagatur
269f85b7b7 scheduled gha fix (#8977) 2023-08-09 09:44:25 -07:00
shibuiwilliam
3adb1e12ca make trajectory eval chain stricter and add unit tests (#8909)
- update trajectory eval logic to be stricter
- add tests to trajectory eval chain
2023-08-09 10:57:18 -04:00
Nuno Campos
b8df15cd64 Adds transform support for runnables (#8762)
---------

Co-authored-by: jacoblee93 <jacoblee93@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-09 12:34:23 +01:00
Harrison Chase
4d72288487 async output parser (#8894)
Co-authored-by: Nuno Campos <nuno@boringbits.io>
2023-08-09 08:25:38 +01:00
Bagatur
3c6eccd701 bump 259 (#8951) 2023-08-09 00:07:47 -07:00
Harrison Chase
7de6a1b78e parent document retriever (#8941) 2023-08-08 22:39:08 -07:00
arjunbansal
a2681f950d add instructions on integrating Log10 (#8938)
- Description: Instructions for integrating with Log10: an [open
source](https://github.com/log10-io/log10) proxiless LLM data management
and application development platform that lets you log, debug and tag
your Langchain calls
  - Tag maintainer: @baskaryan
  - Twitter handle: @log10io @coffeephoenix

Several examples showing the integration are included
[here](https://github.com/log10-io/log10/tree/main/examples/logging) and
in the PR
2023-08-08 19:15:31 -07:00
Aarav Borthakur
3f64b8a761 Integrate Rockset as a chat history store (#8940)
Description: Adds Rockset as a chat history store
Dependencies: no changes
Tag maintainer: @hwchase17

This PR passes linting and testing. 

I added a test for the integration and an example notebook showing its
use.
2023-08-08 18:54:07 -07:00
Bagatur
0a1be1d501 document lcel fallbacks (#8942) 2023-08-08 18:49:33 -07:00
William FH
e3056340da Add id in error in tracer (#8944) 2023-08-08 18:25:27 -07:00
Molly Cantillon
99b5a7226c Weaviate: adding auth example + fixing spelling in ReadME (#8939)
Added basic auth example to Weaviate notebook @baskaryan
2023-08-08 16:24:17 -07:00
Bagatur
95cf7de112 scheduled tests GHA (#8879)
Adding scheduled daily GHA that runs marked integration tests. To start
just marking some tests in test_openai
2023-08-08 14:55:25 -07:00
Joe Reuter
8f0cd91d57 Airbyte based loaders (#8586)
This PR adds 8 new loaders:
* `AirbyteCDKLoader`: a reader that can wrap and run all Python-based
Airbyte source connectors.
* Separate loaders for the most commonly used APIs:
  * `AirbyteGongLoader`
  * `AirbyteHubspotLoader`
  * `AirbyteSalesforceLoader`
  * `AirbyteShopifyLoader`
  * `AirbyteStripeLoader`
  * `AirbyteTypeformLoader`
  * `AirbyteZendeskSupportLoader`

## Documentation and getting started
I added the basic shape of the config to the notebooks. This increases
the maintenance effort a bit, but I think it's worth it to make sure
people can get started quickly with these important connectors. This is
also why I linked the spec and the documentation page in the readme as
these two contain all the information to configure a source correctly
(e.g. it won't suggest using oauth if that's avoidable even if the
connector supports it).

## Document generation
The "documents" produced by these loaders won't have a text part
(instead, all the record fields are put into the metadata). If a text is
required by the use case, the caller needs to do custom transformation
suitable for their use case.

## Incremental sync
All loaders support incremental syncs if the underlying streams support
it. By storing the `last_state` from the reader instance away and
passing it in when loading, it will only load updated records.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-08 14:49:25 -07:00
Eugene Yurtsev
15f650ae8c Add base storage interface, 2 implementations and utility encoder (#8895)
This PR defines an abstract interface for key value stores.

It provides 2 implementations: 
1. Local File System
2. In memory -- used to facilitate testing

It also provides an encoder utility to help take care of serialization
from arbitrary data to data that can be stored by the given store
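
A sketch of the pieces described, assuming the exported names are `InMemoryStore` and `EncoderBackedStore`:

```
import json

from langchain.storage import EncoderBackedStore, InMemoryStore  # assumed names

raw = InMemoryStore()
store = EncoderBackedStore(
    store=raw,
    key_encoder=lambda key: key,
    value_serializer=json.dumps,
    value_deserializer=json.loads,
)
store.mset([("user:1", {"name": "Ada"})])
print(store.mget(["user:1"]))  # [{'name': 'Ada'}]
```
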
2023-08-08 17:29:06 -04:00
Harrison Chase
7543a3d70e Harrison/image (#845)
Co-authored-by: Ashutosh Sanzgiri <sanzgiri@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-08 13:58:27 -07:00
Bagatur
ab193338aa bump 258 (#8932) 2023-08-08 12:54:51 -07:00
Eugene Yurtsev
bb12184551 Internal code deprecation API (#8763)
Proposal for an internal API to deprecate LangChain code.

This PR is heavily based on:
https://github.com/matplotlib/matplotlib/blob/main/lib/matplotlib/_api/deprecation.py

This PR only includes deprecation functionality (no renaming etc.). 
Additional functionality can be added on a need basis (e.g., renaming
parameters), but best to roll out as an MVP to test this
out.

DeprecationWarnings are ignored by default. We can change the policy for
the deprecation warnings, but we'll need to make sure we're not creating
noise for users due to internal code invoking deprecated functionality.
2023-08-08 15:42:22 -04:00
Leonid Ganeline
33a2f58fbf tensorflow_datasets document loader (#8721)
This PR adds the `tensorflow_datasets` document loader
2023-08-08 15:19:28 -04:00
Holt Skinner
fad26e79a3 fix: Resolve AttributeError in Google Cloud Enterprise Search retriever (#8872)
- Reverting some of the changes made in
https://github.com/langchain-ai/langchain/pull/8369
2023-08-08 12:11:12 -07:00
William FH
b2eb4ff0fc Relax Validation in Eval (#8902)
Just check for missing keys
2023-08-08 11:59:30 -07:00
Leonid Ganeline
2d078c7767 PubMed document loader (#8893)
- added `PubMed Document Loader` artifacts, unit tests, and examples
- fixed the `PubMed` utility; added unit tests

@hwchase17
2023-08-08 14:26:03 -04:00
Ofer Mendelevitch
a7824f16f2 Added consistent timeout for Vectara calls (#8892)
- Description: consistent timeout at 60s for all calls to Vectara API
- Tag maintainer: @rlancemartin, @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-08 11:10:32 -07:00
Bagatur
642b57c7ff nit (#8927) 2023-08-08 10:54:25 -07:00
manmax31
4a07fba9f0 Improve query prompt of BGE embeddings (#8908)
- Description: Improved the query prompt of BGE embeddings after talking
with the devs of BGE embeddings
  - Tag maintainer: @hwchase17
  - Twitter handle: @ManabChetia3

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2023-08-08 10:20:37 -07:00
Jeremy W
c5c0735fc4 Remove Evaluation from Modules page (#8926)
Remove Evaluation link (which gives 404 now) from Modules page, since it
lives under Guides page now
2023-08-08 10:20:24 -07:00
Seif
6327eecdaf Fix typo in Vectara docs (#8925)
Fixed a typo in the Vectara docs description.
2023-08-08 10:11:07 -07:00
Chris Pappalardo
beab637f04 added filter kwarg to VectorStoreIndexWrapper query and query_with_so… (#8844)
- Description: added filter to query methods in VectorStoreIndexWrapper
for filtering by metadata (i.e. search_kwargs)
- Tag maintainer: @rlancemartin, @eyurtsev

Updated the doc snippet on this topic as well. It took me a long while
to figure out how to filter the vectorstore by filename, so this might
help someone else out.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-08 10:10:45 -07:00
Apurv Agarwal
4a63533216 addition to docs at 'Store and reference chat history' (#8910)
- Description: I have added an example showing how to pass a custom
template to ConversationalRetrievalChain. Instead of
CONDENSE_QUESTION_PROMPT, we can pass any prompt in the argument
condense_question_prompt. Look in Use cases -> QA over Documents -> How
to -> Store and reference chat history.
  - Issue: #8864
  - Dependencies: NA
  - Tag maintainer: @hinthornw
  - Twitter handle:
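
A compact sketch of passing a custom condense prompt (the vector store contents are illustrative):

```
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import PromptTemplate
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(["LangChain supports conversational retrieval."], OpenAIEmbeddings())
condense_prompt = PromptTemplate.from_template(
    "Chat history:\n{chat_history}\nFollow-up question: {question}\nStandalone question:"
)
chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(),
    retriever=vectorstore.as_retriever(),
    condense_question_prompt=condense_prompt,
)
result = chain({"question": "What does it support?", "chat_history": []})
```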

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-08 10:10:11 -07:00
David vonThenen
bf4a112aa6 Fixes to the Nebula LLM Integration (#8918)
This addresses some issues with introducing the Nebula LLM to LangChain
in this PR:
https://github.com/langchain-ai/langchain/pull/8876

This fixes the following:
- Removes `SYMBLAI` from variable names
- Fixes a bug with `Bearer` for the API key


Thanks again in advance for your help!
cc: @hwchase17, @baskaryan

---------

Co-authored-by: dvonthenen <david.vonthenen@gmail.com>
2023-08-08 10:04:43 -07:00
Jacob Lee
d1e305028f Automatically set docs appearance to system default (#8924)
@baskaryan
2023-08-08 09:54:18 -07:00
Marie-Philippe Gill
6b9f266837 Add user_context to AmazonKendraRetriever (#8869)
### Description 

Now, we can pass information like a JWT token using user_context:  

```python
self.retriever = AmazonKendraRetriever(index_id=kendraIndexId, user_context={"Token": jwt_token})
```

- [x] `make lint`
- [x] `make format`
- [x] `make test`

Also tested by pip installing in my own project, and it allows access
through the token.

### Maintainers 

 @rlancemartin, @eyurtsev

### My twitter handle 

[girlknowstech](https://twitter.com/girlknowstech)
2023-08-08 08:37:03 -07:00
Josh Hart
6116cbf0de Fix imports in awslambda docs (#8916)
Minor doc fix to awslambda tool notebook. 

Add missing import for initialize_agent to awslambda agent example

Co-authored-by: Josh Hart <josharj@amazon.com>
2023-08-08 08:29:28 -07:00
GitHub-L
67718c1d6b Update OpenAPI code to use the requestBody
- Description: The API doc passed to the LLM only included the content of
responses but did not include the content of requestBody, causing the
agent to be unable to construct the correct request parameters based on
the requestBody information. Adding two lines of code fixed the bug.
  - Tag maintainer: @hinthornw
2023-08-08 10:33:21 -04:00
Maurits de Groot
61c2d918c6 Fixed inaccurate import in integrations:providers:bedrock documentation (#8915)
Description:
Fixed inaccurate import in integrations:providers:bedrock documentation

In the current version of the bedrock documentation, the page
https://python.langchain.com/docs/integrations/providers/bedrock states
that the import is `from langchain import Bedrock`.

This has been changed to `from langchain.llms.bedrock import Bedrock`, as
stated in https://python.langchain.com/docs/integrations/llms/bedrock

Issue:
Not applicable

Dependencies:
No dependencies required

Tag maintainer:
@baskaryan

Twitter handle:
Not applicable
2023-08-08 07:24:36 -07:00
Leonid Kuligin
52d6b91c18 Fixed a source for documents uploaded from GCS (#8912)
Sets the source for documents uploaded from GCS to their source on GCS
(#8911).

Co-authored-by: Leonid Kuligin <kuligin@google.com>
2023-08-08 09:34:43 -04:00
Manuel Soria
e74a605379 SQL use case docs (#8513) 2023-08-08 03:30:18 -07:00
Bagatur
022ef170f8 bump 257 (#8903) 2023-08-08 01:16:33 -07:00
Jacob Lee
fa30a57034 Adds Ollama as an LLM (#8829)
Adds Ollama as an LLM. Ollama can run various open source models locally
e.g. Llama 2 and Vicuna, automatically configuring and GPU-optimizing
them.
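
A minimal sketch (assumes a local Ollama server with the model already pulled):

```
from langchain.llms import Ollama

llm = Ollama(model="llama2")  # assumes `ollama pull llama2` was run locally
print(llm("Why is the sky blue?"))
```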

@rlancemartin @hwchase17

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
2023-08-07 21:19:22 -07:00
Ash Vardanian
1f9124ceaa Add: USearch Vector Store (#8835)
## Description

I am excited to propose an integration with USearch, a lightweight
vector-search engine available for both Python and JavaScript, among
other languages.

## Dependencies

It introduces a new PyPi dependency - `usearch`. I am unsure if it must
be added to the Poetry file, as this would make the PR too clunky.
Please let me know.

## Profiles

- Maintainers: @ashvardanian @davvard
- Twitter handles: @ashvardanian @unum_cloud

---------

Co-authored-by: Davit Vardanyan <78792753+davvard@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-07 20:41:00 -07:00
Leonid Kuligin
b52a3785c9 Allow to specify a custom loader for GcsFileLoader (#8868)
Co-authored-by: Leonid Kuligin <kuligin@google.com>
2023-08-07 22:57:31 -04:00
Jeffrey Wang
ff44fe4e16 Change default Metaphor search example to use prompt optimizer (#8890)
- fix install command
- change example notebook to use Metaphor autoprompt by default

2023-08-07 17:25:36 -07:00
Bruno Bornsztein
d56eff042a Make json output parser handle newlines inside markdown code blocks (#8682)
Update to #8528

Newlines and other special characters within markdown code blocks
returned as `action_input` should be handled correctly (in particular,
unescaped `"` => `\"` and `\n` => `\\n`) so they don't break JSON
parsing.

@baskaryan
2023-08-07 15:49:54 -07:00
Jeffrey Wang
ce3666c28b Fix metaphor install command in guide (#8888) 2023-08-07 15:43:47 -07:00
Oege Dijk
cff52638b2 when encountering error during fetch return "" in web_base.py (#8753)
when e.g. downloading a sitemap with a malformed url (e.g.
"ttp://example.com/index.html" with the h omitted at the beginning of
the url), this will ensure that the sitemap download does not crash, but
just emits a warning. (maybe should be optional with e.g. a
`skip_faulty_urls:bool=True` parameter, but this was the most
straightforward fix)

@rlancemartin, @eyurtsev
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-07 15:35:41 -07:00
Harrison Chase
bbd22b9b76 update metaphor docs (#8886) 2023-08-07 14:44:41 -07:00
Bennji94
33cdb06b5c Async RetryOutputParser, RetryWithErrorOutputParser and OutputFixingParser (#8776)
Added async parsing functions for RetryOutputParser,
RetryWithErrorOutputParser and OutputFixingParser.

The async parse functions call the arun methods of the used LLMChains.

Fix for #7989

---------

Co-authored-by: Benjamin May <benjamin.may94@gmail.com>
2023-08-07 14:42:48 -07:00
Carson
cc908d49a3 Fixes typo in documentation (#8882)
Fixes a simple typo in the google search engine tool documentation
@baskaryan
2023-08-07 14:33:21 -07:00
Joshua Sundance Bailey
7fc07ba5df Create ChatAnyscale (#8770)
- Description: Adds the ChatAnyscale class with llama-2 7b, llama-2 13b,
and llama-2 70b on [Anyscale
Endpoints](https://app.endpoints.anyscale.com/)
- It inherits from ChatOpenAI and requires openai (probably unnecessary
but it made for a quick and easy implementation)
- Inspired by https://github.com/langchain-ai/langchain/pull/8434
(@kylehh and @baskaryan )
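
A usage sketch (the model id and environment setup are assumptions):

```
from langchain.chat_models import ChatAnyscale
from langchain.schema import HumanMessage

# Assumes ANYSCALE_API_KEY is set; the model id is illustrative.
chat = ChatAnyscale(model_name="meta-llama/Llama-2-7b-chat-hf")
print(chat([HumanMessage(content="Say hello!")]))
```
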
2023-08-07 13:21:05 -07:00
idcore
fe78aff1f2 Add new parameter forced_decoder_ids to OpenAIWhisperParserLocal + small bug fix (#8793)
- Description: new parameter forced_decoder_ids for
OpenAIWhisperParserLocal to force the input language and enable an
optional translate mode. Usage example:

    processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
    forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
    # forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
    loader = GenericLoader(
        YoutubeAudioLoader(urls, save_dir),
        OpenAIWhisperParserLocal(lang_model="openai/whisper-medium", forced_decoder_ids=forced_decoder_ids),
    )

  - Issue #8792
  - Tag maintainer: @rlancemartin, @eyurtsev

---------

Co-authored-by: idcore <eugene.novozhilov@gmail.com>
2023-08-07 13:17:58 -07:00
David vonThenen
40079d4936 Introduce Nebula LLM to LangChain (#8876)
## Description

This PR adds Nebula to the available LLMs in LangChain.

Nebula is an LLM focused on conversation understanding and enables users
to extract conversation insights from video, audio, text, and chat-based
conversations. These conversations can occur between any mix of human or
AI participants.

Examples of some questions you could ask Nebula from a given
conversation are:
- What could be the customer’s pain points based on the conversation?
- What sales opportunities can be identified from this conversation?
- What best practices can be derived from this conversation for future
customer interactions?

You can read more about Nebula here:

https://symbl.ai/blog/extract-insights-symbl-ai-generative-ai-recall-ai-meetings/

#### Integration Test 

An integration test is added, but it requires network access. Since
Nebula is fully managed like OpenAI, network access is required to
exercise the integration test.

#### Linting

- [x] make lint
- [x] make test (TODO: there seems to be a failure in another
non-related test??? Need to check on this.)
- [x] make format

### Dependencies

No new dependencies were introduced.

### Twitter handle

[@symbldotai](https://twitter.com/symbldotai)
[@dvonthenen](https://twitter.com/dvonthenen)


If you have any questions, please let me know.

cc: @hwchase17, @baskaryan

---------

Co-authored-by: dvonthenen <david.vonthenen@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-07 13:15:26 -07:00
Lance Martin
84c1ad7eaa Fix colab link for extraction ntbk (#8878)
2023-08-07 11:36:46 -07:00
Nuno Campos
9892e95d03 Add flush=True to stream examples (#8862) 2023-08-07 14:33:17 -04:00
Eugene Yurtsev
f616aee35a JsonOutputFunctionParser: Fix mutation in place bug (#8758)
Fixes an in-place mutation bug in the JsonOutputFunctionParser, which
caused issues when trying to re-use the original AI message.
2023-08-07 14:32:46 -04:00
shibuiwilliam
ab47557db3 fix evaluation parse test (#8859)
# What
- fix evaluation parse test

- Description: Fix evaluation parse test
- Issue: None
- Dependencies: None
- Tag maintainer: @baskaryan
- Twitter handle: @MLOpsJ
2023-08-07 11:15:41 -07:00
manmax31
40096c73cd Add BGE embeddings support (#8848)
- Description: [BGE-large](https://huggingface.co/BAAI/bge-large-en)
embeddings from BAAI are at the top of [MTEB
leaderboard](https://huggingface.co/spaces/mteb/leaderboard). Hence
adding support for it.
- Tag maintainer: @baskaryan
- Twitter handle: @ManabChetia3
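
A minimal sketch using the new class added here:

```
from langchain.embeddings import HuggingFaceBgeEmbeddings

embeddings = HuggingFaceBgeEmbeddings(model_name="BAAI/bge-large-en")
vector = embeddings.embed_query("What are BGE embeddings?")
print(len(vector))
```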

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-07 11:15:30 -07:00
shibuiwilliam
fbc83dfdbb Fix/abstract add message (#8856)
- Description: Fix/abstract add message
- Issue: None
- Dependencies: None
- Twitter handle: @MLOpsJ
2023-08-07 11:02:19 -07:00
William FH
91be7eee66 Add concurrency support for run_on_dataset (#8841)
Long-term, would be better to use the lower-level batch() method(s) but
it may take me a bit longer to clean up. This unblocks in the meantime,
though it may fail when the evaluated chain raises a
`NotImplementedError` for a corresponding async method
2023-08-07 09:24:48 -07:00
Bagatur
fc2f450f2d bump 256 (#8870) 2023-08-07 08:29:02 -07:00
Tudor Golubenco
aeaef8f3a3 Add support for Xata as a vector store (#8822)
This adds support for [Xata](https://xata.io) (data platform based on
Postgres) as a vector store. We have recently added [Xata to
Langchain.js](https://github.com/hwchase17/langchainjs/pull/2125) and
would love to have the equivalent in the Python project as well.

The PR includes integration tests and a Jupyter notebook as docs. Please
let me know if anything else would be needed or helpful.

I have added the xata python SDK as an optional dependency.

## To run the integration tests

You will need to create a DB in xata (see the docs), then run something
like:

```
OPENAI_API_KEY=sk-... XATA_API_KEY=xau_... XATA_DB_URL='https://....xata.sh/db/langchain'  poetry run pytest tests/integration_tests/vectorstores/test_xata.py
```


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Philip Krauss <35487337+philkra@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-07 08:14:52 -07:00
Harrison Chase
472f00ada7 add moderation example (#8718) 2023-08-07 07:50:11 -07:00
Leonid Kuligin
6e3fa59073 Added chat history to codey models (#8831)
#7469

since 1.29.0, Vertex SDK supports a chat history provided to a codey
chat model.
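
A minimal sketch of what this enables; the model name is illustrative, and it assumes `ChatVertexAI` forwards prior turns in the message list as history:

```
from langchain.chat_models import ChatVertexAI
from langchain.schema import AIMessage, HumanMessage

chat = ChatVertexAI(model_name="codechat-bison")  # a codey chat model
history = [
    HumanMessage(content="Write a Python function that adds two numbers."),
    AIMessage(content="def add(a, b):\n    return a + b"),
]
# The earlier turns are passed to the codey model as chat history.
response = chat(history + [HumanMessage(content="Now make it handle lists too.")])
```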

Co-authored-by: Leonid Kuligin <kuligin@google.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-07 07:34:35 -07:00
Massimiliano Pronesti
a616e19975 feat(llms): add support for vLLM (#8806)
Hello langchain maintainers, 
this PR aims at integrating
[vllm](https://vllm.readthedocs.io/en/latest/#) into langchain. This PR
closes #8729.

This feature clearly depends on `vllm`, but I've seen other models
supported here depend on packages that are not included in
pyproject.toml (e.g. `gpt4all`, `text-generation`), so I assumed the
same applies to this one.
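
A minimal usage sketch (the model name is illustrative; requires the optional `vllm` package):

```
from langchain.llms import VLLM

llm = VLLM(model="mosaicml/mpt-7b", max_new_tokens=128)
print(llm("What is the capital of France?"))
```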

@hwchase17, @baskaryan

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-07 07:32:02 -07:00
Bagatur
100d9ce4c7 bump 255 (#8865) 2023-08-07 07:25:23 -07:00
Vic Cao
c9da300e4d fix: overwrite stream for ChatOpenAI in runtime (#8288)
@hwchase17, @baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Nuno Campos <nuno@boringbits.io>
2023-08-07 10:18:30 +01:00
Karthik Raja A
5a9765b1b5 MultiOn client toolkit update 2.0 (#8750)
- Updated to use the newer, better function interaction
- The previous version had only one callback
- @hinthornw @hwchase17 can you look into this?
- Shout out to @MultiON_AI @DivGarg9 on Twitter

---------

Co-authored-by: Naman Garg <ngarg3@binghamton.edu>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-06 22:24:10 -07:00
Emre
454998c1fb Fix invalid escape sequence warnings (#8771)

Description: The lines I have changed look like they are incorrectly
escaped for regex. In Python 3.11, I receive a DeprecationWarning for
these lines. You don't see any warnings unless you explicitly run Python
with the `-W always::DeprecationWarning` flag, so this is my attempt to fix it.

Here are the warnings from log files:

```
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:919: DeprecationWarning: invalid escape sequence '\s'
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:918: DeprecationWarning: invalid escape sequence '\s'
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:917: DeprecationWarning: invalid escape sequence '\s'
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:916: DeprecationWarning: invalid escape sequence '\c'
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:903: DeprecationWarning: invalid escape sequence '\*'
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:804: DeprecationWarning: invalid escape sequence '\*'
/usr/local/lib/python3.11/site-packages/langchain/text_splitter.py:804: DeprecationWarning: invalid escape sequence '\*'
```
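
For reference, the usual fix is to turn such patterns into raw strings so the backslash reaches the regex engine intact:

```
import re

pattern = "\s+"   # triggers DeprecationWarning: invalid escape sequence '\s'
pattern = r"\s+"  # raw string: no warning, same regex
re.split(pattern, "a b  c")
```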

cc @baskaryan

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-06 17:01:18 -07:00
Harrison Chase
0adc282d70 Harrison/as retriever docstring (#8840)
Co-authored-by: Bytestorm <31070777+Bytestorm5@users.noreply.github.com>
2023-08-06 17:00:57 -07:00
Zend
bd4865b6fe Async Recursive URL loader (#8502)
Description: This PR improves recursive_url_loader by limiting the crawl
depth and adding customizable extractors (from the raw webpage to the
text of the Document object), so that users can use other tools to
extract content from the webpage. This PR also includes the documentation
and a test for the new loader.
The old PR was closed due to a project structure change. #7756

Because socket requests are not allowed, the old unit test was removed.
Issue: N/A
Dependencies: asyncio, aiohttp
Tag maintainer: @rlancemartin
Twitter handle: @ Zend_Nihility
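
A minimal sketch of the new knobs (the URL is illustrative):

```
from langchain.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    url="https://docs.python.org/3.9/",
    max_depth=2,  # limit how deep the crawl goes
    extractor=lambda raw_html: raw_html,  # plug in e.g. BeautifulSoup here
)
docs = loader.load()
```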

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
2023-08-06 16:22:31 -07:00
fqassemi
485d716c21 Feature faiss delete (#8135)
- Description: the docstore had two main methods, add and search;
however, dealing with a docstore sometimes requires deleting an entry.
So I have added a simple delete method that removes items from the
docstore. Additionally, I have added a delete method to the FAISS
vectorstore for the very same reason.
  - Issue: NA
  - Dependencies: NA
  - Tag maintainer:  @rlancemartin, @eyurtsev
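
A minimal sketch of the new method, assuming it takes a list of docstore ids:

```
from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores import FAISS

vs = FAISS.from_texts(["alpha", "beta"], FakeEmbeddings(size=4))
doc_id = vs.index_to_docstore_id[0]
vs.delete([doc_id])  # removes the entry from both the index and the docstore
```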

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-06 15:46:30 -07:00
Nicolas
b57fa1a39c docs: Improvements on Mendable Search (#8808)
- Balancing prioritization between keyword / AI search
- Show snippets of highlighted keywords when searching 
- Improved keyword search
- Fixed bugs and issues

Shoutout to @calebpeffer for implementing and gathering feedback on it 

cc: @dev2049 @rlancemartin @hwchase17
2023-08-06 15:32:06 -07:00
Ikko Eltociear Ashimine
6b93670410 Fix typo in long_context_reorder.ipynb (#8811)
begining -> beginning

2023-08-06 15:31:38 -07:00
Harrison Chase
2bb1d256f3 add example of memory and returning retrieved docs (#8830) 2023-08-06 15:25:12 -07:00
Pierre Alexandre SCHEMBRI
4a7ebb7184 Fix issue #7616 (#7617)
Fix Issue #7616 with a simpler approach to extract function names (use
`__name__` attribute)

@hwchase17

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-06 15:12:03 -07:00
Ankur Agarwal
797c9e92c8 #8786 Fixed: Callback handler disconnect in between (#8787)
Fixes for  #8786 @agola11 

- Description: The callback flow was breaking before reaching the last
chain, as callbacks were dropped in intermediate chains along nested
paths. This change gives a full trace and correlates the parent-child
relationship across all nested chains.

  - Issue: the issue #8786 
  - Dependencies: NA
  - Tag maintainer: @agola11 
  - Twitter handle: Agarwal_Ankur
2023-08-06 15:11:45 -07:00
Kshitij Wadhwa
5f1aab5487 Fix docs for Rockset (#8807)
* remove error output for notebook
* add comment about vector length for ingest transformation
* change OPENAI_KEY -> OPENAI_API_KEY

cc @baskaryan
2023-08-06 15:04:01 -07:00
William FH
983678dedc Add Dist Metrics for String Distance Evaluation (#8837)
Co-authored-by: shibuiwilliam <shibuiyusuke@gmail.com>
2023-08-06 14:05:00 -07:00
William FH
f76d50d8dc fix exception inconsistencies (#8812) (#8839)
Merge #8812 with main to fix unrelated test failure

Co-authored-by: shibuiwilliam <shibuiyusuke@gmail.com>
2023-08-06 14:04:49 -07:00
Bagatur
15c271e7b3 bump 254 (#8834) 2023-08-06 11:34:54 -07:00
Bagatur
d7b613a293 Bagatur/revert revert nuclia (#8833) 2023-08-06 11:24:36 -07:00
Bagatur
2f309a4ce6 Revert "Bagatur/nuclia (#8404)" (#8832) 2023-08-06 11:14:01 -07:00
Paul Hager
2111ed3c75 Improving the text of the invalid tool to list the available tools. (#8767)
Description: When using a ReAct agent with tools and no tool is found,
InvalidTool gets called. Previously it just asked for a different
action, but I've found that listing the available actions improves the
chances of getting a valid action in the next round. I've also added a
unit test for it.

@hinthornw
2023-08-05 18:09:32 -07:00
shibuiwilliam
d9bc46186d Add missing test for retrievers self_query (#8783)
# What
- Add missing test for retrievers self_query
- Add missing import validation

  - Description: Add missing test for retrievers self_query
  - Issue: None
  - Dependencies: None
  - Tag maintainer: @rlancemartin, @eyurtsev
  - Twitter handle: @MlopsJ
  
2023-08-05 17:31:41 -07:00
Snehil Kumar
1bd4890506 Update links on QA Use Case docs (#8784)
- Description: two links were not working on the Question Answering use
cases documentation page, so I changed them to the nearest useful links,
  - Issue: NA,
  - Dependencies: NA,
  - Tag maintainer: @baskaryan,
  - Twitter handle: NA

2023-08-05 17:30:56 -07:00
Wilson Leao Neto
b0d0338f21 feat: expose Kendra result item id and document id as document metadata (#8796)
- Description: we expose Kendra result item id and document id as
document metadata.
  - Tag maintainer: @3coins @baskaryan 
  - Twitter handle: wilsonleao

**Why**
The result item id and document id might be used to keep track of the
retrieved resources.
2023-08-05 17:21:24 -07:00
Bal Narendra Sapa
a22d502248 added the embeddings part (#8805)
Description: I forgot to add the embeddings part to the documentation,
sorry 😅

@baskaryan
2023-08-05 17:16:33 -07:00
Bagatur
9b86235a56 bump 253 (#8798) 2023-08-05 10:57:22 -07:00
Bagatur
9fc9018951 Bagatur/nuclia (#8404)
Co-authored-by: Eric BREHAULT <ebrehault@gmail.com>
2023-08-05 10:44:43 -07:00
Francisco Ingham
ef5bc1fef1 Refactor for extraction docs (#8465)
Refactor for the extraction use case documentation

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Lance Martin <lance@langchain.dev>
2023-08-05 10:09:14 -07:00
William FH
1d68470bac Same Project for Eval Runs (#8781) 2023-08-04 17:51:49 -07:00
William FH
c8f3615aa6 Support evaluating runnables and arbitrary functions (#8698)
Added a couple of "integration tests" for these that I ran.

Main design point for feedback: at this point, would it just be better to
have separate arguments for each type? It's a little confusing what is or
isn't supported and what the intended usage is, since I try to wrap the
function as a runnable, or pack or unpack chains/LLMs.

```
run_on_dataset(
    ...,
    llm_or_chain_factory=None,
    llm=None,
    chain=None,
    runnable=None,
    function=None,
)
# raise an error if none is set
```

Downside with runnables and arbitrary function support is that you get
much less helpful validation and error messages, but I don't think we
should block you from this, at least.
2023-08-04 16:39:04 -07:00
liguoqinjim
d00a247da7 fix:get bilibili subtitles (#8165)
- Description: fix the Loader 'BiliBiliLoader'
  - Issue: the API response was changed

![image](https://github.com/langchain-ai/langchain/assets/2113954/91216793-82f8-4c82-a018-d49f36f5f6aa)
The previously used API no longer returns the "subtitle_url" property.

![image](https://github.com/langchain-ai/langchain/assets/2113954/a8ec2a7a-f40d-4c2a-b7d0-0ccdf2b327cc)
We should use another API to get the `subtitle_url` property.
The `subtitle_url` returned by this API does not include the HTTP
scheme, which needs to be added.

  - Dependencies: Nope
  - Tag maintainer: @rlancemartin
2023-08-04 14:30:41 -07:00
Bagatur
21771a6f1c rm sklearn links (#8773) 2023-08-04 14:28:00 -07:00
Joshua Carroll
e5fed7d535 Extend the StreamlitChatMessageHistory docs with a fuller example and… (#8774)
Add more details to the [notebook for
StreamlitChatMessageHistory](https://python.langchain.com/docs/integrations/memory/streamlit_chat_message_history),
including a link to a [running example
app](https://langchain-st-memory.streamlit.app/).

Original PR: https://github.com/langchain-ai/langchain/pull/8497
2023-08-04 14:27:46 -07:00
Eugene Yurtsev
19dfe166c9 Update documentation for prompts (#8381)
* Documentation to favor creation without declaring input_variables
* Cut out obvious examples, but add more description in a few places

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2023-08-04 14:25:03 -07:00
Dayou Liu
91a0817e39 docs: llamacpp minor fixes (#8738)
- Description: minor updates on llama cpp doc
2023-08-04 14:19:43 -07:00
Bagatur
f437311eef Bagatur/runnable with fallbacks (#8543) 2023-08-04 14:06:05 -07:00
Eugene Yurtsev
003e1ca9a0 Update api references (#8646)
Update API reference documentation. This PR will pick up a number of missing classes, it also applies selective formatting based on the class / object type.
2023-08-04 16:10:58 -04:00
Piyush Jain
8374367de2 Amazon Textract as document loader (#8661)
Description: Adding support for [Amazon
Textract](https://aws.amazon.com/textract/) as a PDF document loader

---------

Co-authored-by: schadem <45048633+schadem@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-04 15:55:06 -04:00
Leonid Ganeline
82ef1f587d fix makefile help (#8723)
Fixed the `makefile` help. It was not up-to-date.
 @baskaryan
2023-08-04 15:37:00 -04:00
Neil Murphy
b0d0399d34 (issue #5163) Append reminder to nest multi-prompt router prompt output in JSON markdown code block, resolving JSON parsing error. (#8709)
Resolves occasional JSON parsing error when some predictions are passed
through a `MultiPromptChain`.

Makes [this
modification](https://github.com/langchain-ai/langchain/issues/5163#issuecomment-1652220401)
to `multi_prompt_prompt.py`, which is much cleaner than appending an
entire example object, which is another community-reported solution.

@hwchase17, @baskaryan

cc: @SimasJan
2023-08-04 15:36:34 -04:00
Snehil Kumar
a6ee646ef3 Update get_started.mdx (#8744)
- Description: Added a missing word and rearranged a sentence in the
documentation of Self Query Retrievers,
  - Issue: NA,
  - Dependencies: NA,
  - Tag maintainer: @baskaryan,
  - Twitter handle: NA

Thanks for your time.
2023-08-04 15:32:19 -04:00
Bal Narendra Sapa
bd61757423 add documentation for serializer function (#8769)
Description: Added necessary documentation for serializer functions

@baskaryan
2023-08-04 14:39:40 -04:00
rjanardhan3
affaaea87b Updates fireworks (#8765)
  - Description: Updates to Fireworks Documentation, 
  - Issue: N/A,
  - Dependencies: N/A,
  - Tag maintainer: @rlancemartin,

---------

Co-authored-by: Raj Janardhan <rajjanardhan@Rajs-Laptop.attlocal.net>
2023-08-04 10:32:22 -07:00
Bagatur
8c35fcb571 update rss doc (#8761) 2023-08-04 08:25:20 -07:00
Bagatur
e45be8b3f6 bump 252 (#8759) 2023-08-04 08:22:16 -07:00
Bagatur
0d5a90f30a Revert "add filter to sklearn vector store functions (#8113)" (#8760) 2023-08-04 08:13:32 -07:00
Ben Auffarth
6b007e2829 update repo username to langchain-ai (#8747)
Time for this minor update? @hwchase17
2023-08-04 07:31:39 -07:00
Lance Martin
be638ad77d Chatbots use case (#8554)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-04 07:02:14 -07:00
Bagatur
115a77142a support for arbitrary kwargs for llamacpp (#8727)
llama.cpp params (per their own code) are unstable, so instead of
constantly adding/deleting them, this adds a model_kwargs parameter that
allows for arbitrary additional kwargs

cc @jsjolund and @zacps re #8599 and #8704
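
A minimal sketch (the path and extra kwarg are illustrative; `model_kwargs` is passed straight through to llama.cpp):

```
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/model.bin",
    model_kwargs={"rope_freq_scale": 0.5},  # arbitrary extra llama.cpp kwarg
)
```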
2023-08-04 06:52:02 -07:00
Alec Flett
f0b0c72d98 add load() deserializer function that bypasses need for json serialization (#7626)
There is already a `loads()` function which takes a JSON string and
loads it using the Reviver

But in the callbacks system, there is a `serialized` object that is
passed in and that object is already a deserialized JSON-compatible
object. This allows you to call `load(serialized)` and bypass
intermediate JSON encoding.

I found one other place in the code that benefited from this
short-circuiting (string_run_evaluator.py) so I fixed that too.

Tagging @baskaryan for general/utility stuff.
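
A minimal sketch of the short-circuit, assuming `dumpd` produces the same JSON-compatible dict the callbacks receive:

```
from langchain.load.dump import dumpd
from langchain.load.load import load
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me about {topic}")
serialized = dumpd(prompt)       # already a JSON-compatible dict
roundtripped = load(serialized)  # new: no intermediate json.dumps/json.loads
```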


---------

Co-authored-by: Nuno Campos <nuno@boringbits.io>
2023-08-04 09:49:41 +01:00
Ruiqi Guo
6aee589eec Add ScaNN support in vectorstore. (#8251)
Description: Add ScaNN vectorstore to langchain.
ScaNN is an open-source, high-performance vector similarity library
optimized for AVX2-enabled CPUs.
https://github.com/google-research/google-research/tree/master/scann

- Dependencies: scann

Python notebook to illustrate the usage:
docs/extras/integrations/vectorstores/scann.ipynb
Integration test:
libs/langchain/tests/integration_tests/vectorstores/test_scann.py

@rlancemartin, @eyurtsev for review.

Thanks!
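
A minimal usage sketch (any embedding model works; FakeEmbeddings keeps the example dependency-light):

```
from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores import ScaNN

db = ScaNN.from_texts(
    ["hello world", "goodbye world"],
    FakeEmbeddings(size=8),
)
print(db.similarity_search("hello", k=1))
```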
2023-08-03 23:41:30 -07:00
Moonsik Kang
5b7ff215e8 Fix load map reduce documents chain (#7915)
This PR updates _load_reduce_documents_chain to handle
`reduce_documents_chain` and `combine_documents_chain` config

Please review @hwchase17, @baskaryan

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-03 23:27:38 -07:00
shibuiwilliam
0f0ccfe7f6 add filter to sklearn vector store functions (#8113)
# What
- This is to add a filter option to the sklearn vector store functions

  - Description: Add filter to sklearn vector store functions.
  - Issue: None
  - Dependencies: None
  - Tag maintainer: @rlancemartin, @eyurtsev
  - Twitter handle: @MlopsJ


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-03 23:06:41 -07:00
shibuiwilliam
2759e2d857 add save and load tfidf vectorizer and docs for TFIDFRetriever (#8112)
This is to add save_local and load_local to tfidf_vectorizer and docs in
tfidf_retriever to make the vectorizer reusable.

- Description: add save_local and load_local to tfidf_vectorizer and
docs in tfidf_retriever
  - Issue: None
  - Dependencies: None
  - Tag maintainer: @rlancemartin, @eyurtsev
  - Twitter handle: @MlopsJ
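
A minimal sketch of the new round-trip (the folder name is illustrative):

```
from langchain.retrievers import TFIDFRetriever

retriever = TFIDFRetriever.from_texts(["foo", "bar", "hello world"])
retriever.save_local("tfidf_index")  # persists the fitted vectorizer too
loaded = TFIDFRetriever.load_local("tfidf_index")
docs = loaded.get_relevant_documents("hello")
```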


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-03 23:06:27 -07:00
aerickson-clt
0f68054401 Issue #8089 Improve painless script scoring with params.query_value. (#8086)
This is a minor improvement that replaces the full query_vector with the
reference string `params.query_value` used in the painless scripting
docs. I have tested it manually and it works on an example. This makes
the query about half the size and much easier to read.


https://opensearch.org/docs/latest/search-plugins/knn/painless-functions/#get-started-with-k-nns-painless-scripting-functions

@babbldev 
#8089

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-03 23:06:17 -07:00
linpan
0ead8ea708 typo: ignored to ignore (#8740)
2023-08-03 23:05:59 -07:00
aerickson-clt
c7ea6e9ff8 Issue 8081 Fix query results size bug. Other bug: pass vector_field param. (#8085)
@baskaryan
#8081 

The issue likely occurred because OpenSearch's default k is 10, so it
needs to be specified.

Here's a similar question about its cousin ElasticSearch

https://discuss.elastic.co/t/elasticsearch-returns-only-10-records-but-the-hit-is-507/136605

I tested this manually and also fixed the same issue in
`_default_painless_scripting_query`. In addition,
`_default_painless_scripting_query` was not passing the `vector_field`
name to a sub call, so I fixed that too.


![image](https://github.com/hwchase17/langchain/assets/32244272/cfb7aad1-f701-49d9-9beb-a723aa276817)

I also tested this in the aws opensearch developer tools.


![image](https://github.com/hwchase17/langchain/assets/32244272/24544682-1578-4bbb-9eb5-980463c5b41b)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-03 22:41:11 -07:00
Sidchat95
812419d946 Removing score threshold parameter of faiss _similarity_search_with_r… (#8093)
Removing the score threshold parameter of FAISS's
_similarity_search_with_relevance_scores, as the thresholding is
implemented in the similarity_search_with_relevance_scores method that
calls it.

Since this method is supposed to be a private method of faiss.py, it
will never receive the score threshold parameter: it is popped in the
superclass method similarity_search_with_relevance_scores.

@baskaryan @hwchase17
2023-08-03 21:31:43 -07:00
Mathias Panzenböck
873a80e496 Reduce generation of temporary objects (#7950)
Just a tiny change to use `list.append(...)` and `list.extend(...)`
instead of `list += [...]` so that no unnecessary temporary lists are
created.
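
For example:

```
items = []
items.append(42)        # instead of: items += [42]
items.extend(range(3))  # instead of: items += list(range(3))
```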

Since it's a tiny miscellaneous thing, I guess @baskaryan is the
maintainer to tag?

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-03 21:24:08 -07:00
Lance Martin
d1b95db874 Retriever that can re-phase user inputs (#8026)
Simple retriever that applies an LLM between the user input and the
query passed to the retriever.

It can be used to pre-process the user input in any way.

The default prompt:

```
DEFAULT_QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""You are an assistant tasked with taking a natural languge query from a user
    and converting it into a query for a vectorstore. In this process, you strip out
    information that is not relevant for the retrieval task. Here is the user query: {question} """
)
```
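
A minimal usage sketch; the class name and `from_llm` constructor are assumptions based on this description:

```
from langchain.chat_models import ChatOpenAI
from langchain.retrievers import RePhraseQueryRetriever  # name assumed

# `vectorstore` is any vector store you already have.
retriever = RePhraseQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=ChatOpenAI(temperature=0),
)
docs = retriever.get_relevant_documents(
    "Hi I'm Lance. What are the approaches to Task Decomposition?"
)
```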

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-03 21:23:59 -07:00
Harrison Chase
6c3573e7f6 Harrison/aleph alpha (#8735)
Co-authored-by: PiotrMazurek <piotr.mazurek@aleph-alpha.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-03 21:21:15 -07:00
Wilson Leao Neto
179a39954d Provides access to a Document page_content formatter in the AmazonKendraRetriever (#8034)
- Description:
    - Provides a new attribute in the AmazonKendraRetriever which
processes a ResultItem and returns the string that will be used as
page_content;
    - The excerpt metadata is not changed and is kept as retrieved, but
it is cleaned when composing the page_content;
    - Refactors the AmazonKendraRetriever to improve code reusability;
- Issue: #7787 
- Tag maintainer: @3coins @baskaryan
- Twitter handle: wilsonleao

**Why?**

Some use cases need to adjust the page_content by dynamically combining
the ResultItem attributes depending on the context of the item.
2023-08-03 20:54:49 -07:00
Ilya
6f0bccfeb5 Add regex control over separators in character text splitter (#7933)
#7854

Added the ability to use the `separator` as a regex or a simple
character.
Fixed a bug where `start_index` was incorrectly counting from -1.
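
A minimal sketch of the new option (the flag name is assumed from the PR):

```
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter(
    separator=r"\n\n+",
    is_separator_regex=True,  # treat the separator as a regex, not a literal
    chunk_size=100,
    chunk_overlap=0,
)
chunks = splitter.split_text("one\n\ntwo\n\n\nthree")
```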

Who can review?
@eyurtsev
@hwchase17 
@mmz-001
2023-08-03 20:25:23 -07:00
Vasileios Mansolas
e68a1d73d0 Fix Issue #6650: Enable Azure Active Directory token-based auth access for AzureChatOpenAI (#8622)
When using AzureChatOpenAI, the openai_api_type defaults to "azure". The
utils' get_from_dict_or_env() function triggered by the root validator
does not look for user-provided values in the OPENAI_API_TYPE
environment variable, so other values like "azure_ad" are replaced with
"azure". This does not allow the use of token-based auth.

By removing the default value, environment variables can be pulled at
runtime for openai_api_type, which enables the other api_types that are
expected to work.

This fixes #6650

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-03 20:21:41 -07:00
Ofer Mendelevitch
29f51055e8 Updates to Vectara documentation (#8699)
- Description: updates to Vectara documentation with more details on how
to get started.
- Issue: NA
- Dependencies: NA
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @vectara, @ofermend

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-03 20:21:17 -07:00
Alec Flett
5d765408ce propagate callbacks through load_summarize_chain (#7565)
This lets you pass callbacks when you create the summarize chain:

```
summarize = load_summarize_chain(llm, chain_type="map_reduce", callbacks=[my_callbacks])
summary = summarize(documents)
```
See #5572 for a similar surgical fix.

tagging @hwchase17 for callbacks work

2023-08-03 20:12:34 -07:00
Alec Flett
404d103c41 propagate RetrievalQA chain callbacks through its own LLMChain and StuffDocumentsChain (#7853)
This is another case, similar to #5572 and #7565 where the callbacks are
getting dropped during construction of the chains.

tagging @hwchase17 and @agola11 for callbacks propagation

2023-08-03 20:11:58 -07:00
Bal Narendra Sapa
47eea32f6a add serializer methods (#7914)
Description: I have added serializer and deserializer methods. There was
already a save_local method, but it saves to the local disk. I wanted
the vectorstore in a format that I could push to a SQL database's blob
field, which I needed for something I was working on.

@rlancemartin, @eyurtsev

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-03 20:10:35 -07:00
Ryan Sloan
b786335dd1 fix RecursiveUrlLoader (#8582)
Description: the recursive URL loader does not fully crawl all URLs
under the base URL
Maintainer: @baskaryan
2023-08-03 16:51:57 -07:00
William FH
f81e613086 Fix Async Retry Event Handling (#8659)
It fails currently because the event loop is already running.

The `retry` decorator already infers an `AsyncRetrying` handler for
coroutines (see [tenacity
line](aa6f8f0a24/tenacity/__init__.py (L535))).
However, before_sleep always gets called synchronously (see [tenacity
line](aa6f8f0a24/tenacity/__init__.py (L338))).

Instead, check for a running loop and use it if one exists. Of course,
this runs an async method synchronously, which is not _nice_. Given how
important LLMs are, it may make sense to have a task list or something,
but I'd want to chat with @nfcampos on where that would live.

This PR also fixes the unit tests to check that the handler is called
and to make sure the async test is run (it looks like it had just been
skipped). It would have failed prior to the proposed fixes but passes
now.
2023-08-03 15:02:16 -07:00
ruze
8ef7e14a85 RSS Feed / OPML loader (#8694)
- Description: added a document loader for a list of RSS feeds or OPML.
It iterates through the list and uses NewsURLLoader to load each
article.
  - Issue: N/A
  - Dependencies: feedparser, listparser
  - Tag maintainer: @rlancemartin, @eyurtsev
  - Twitter handle: @ruze
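
A minimal usage sketch (the feed URL is illustrative; the class name is assumed from this description):

```
from langchain.document_loaders import RSSFeedLoader  # name assumed

loader = RSSFeedLoader(urls=["https://news.ycombinator.com/rss"])
docs = loader.load()  # each article is loaded via NewsURLLoader
```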

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-03 14:58:06 -07:00
sumandeng
53e4148a1b add model_revison parameter to ModelScopeEmbeddings (#8669)
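A minimal sketch of the new parameter (the model id and revision are illustrative):

```
from langchain.embeddings import ModelScopeEmbeddings

embeddings = ModelScopeEmbeddings(
    model_id="damo/nlp_corom_sentence-embedding_english-base",
    model_revision="v1.0.0",  # pin a specific model version
)
vector = embeddings.embed_query("hello")
```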
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-03 14:17:48 -07:00
Yoshi
4e8f11b36a Deterministic Fake Embedding Model (#8706)
Solves #8644 
This embedding model outputs identical random embedding vectors when the
input texts are identical.
Useful in unit tests.
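
A minimal sketch (the class name is assumed from the issue):

```
from langchain.embeddings import DeterministicFakeEmbedding  # name assumed

emb = DeterministicFakeEmbedding(size=8)
assert emb.embed_query("hello") == emb.embed_query("hello")  # same input, same vector
assert emb.embed_query("hello") != emb.embed_query("world")
```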
@baskaryan
2023-08-03 13:36:45 -07:00
Leonid Kuligin
2928a1a3c9 added minimum expected version of SDK to the error description (#8712)
#7932

Co-authored-by: Leonid Kuligin <kuligin@google.com>
2023-08-03 13:28:42 -07:00
Harrison Chase
814faa9de5 relax deps for yaml (#8713)
context: https://github.com/yaml/pyyaml/issues/724

I think this is fine? I don't think we use yaml too heavily
2023-08-03 13:22:17 -07:00
Holt Skinner
8a8917e0d9 feat: Add Spell Correction Spec to Google Cloud Enterprise Search connector (#8705) 2023-08-03 13:38:45 -04:00
Bagatur
b2b71b0d35 Bagatur/eden llm (#8670)
Co-authored-by: RedhaWassim <rwasssim@gmail.com>
Co-authored-by: KyrianC <ckyrian@protonmail.com>
Co-authored-by: sam <melaine.samy@gmail.com>
2023-08-03 10:24:51 -07:00
William FH
8022293124 lint (#8702) 2023-08-03 09:33:28 -07:00
axa99
1f54ec899b updated interface jupyter notebook explanations (#8689)
Updated the documentation in the interface.ipynb to clearly show the
_input_ and _output_ types for various components @baskaryan
2023-08-03 11:53:31 -04:00
William FH
a137492b53 Permit none key in chain mapper (#8696) 2023-08-03 08:50:36 -07:00
Bagatur
e283dc8d50 bump 251 (#8690) 2023-08-03 06:28:36 -07:00
Eugene Yurtsev
81e0cbf2d5 Minor typo fix (#8657)
Fix typo in doc-string.
2023-08-02 23:20:25 -07:00
Lance Martin
37aade19da Minor formatting and additional figure for summarization use case (#8663) 2023-08-02 21:52:29 -07:00
Harrison Chase
43dffe39fb Harrison/conversational retrieval agent (#8639)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-02 18:05:15 -07:00
ruze
71f98db2fe Newspaper (#8647)
- Description: Added newspaper3k based news article loader. Provide a
list of urls.
  - Issue: N/A
  - Dependencies: newspaper3k,
  - Tag maintainer: @rlancemartin , @eyurtsev 
  - Twitter handle: @ruze
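
A minimal usage sketch (the URL is illustrative; requires `newspaper3k`):

```
from langchain.document_loaders import NewsURLLoader

loader = NewsURLLoader(urls=["https://example.com/some-article"])
docs = loader.load()
```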

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-02 17:56:08 -07:00
shibuiwilliam
f68f3b23d7 add missing RemoteLangChainRetriever _get_relevant_documents test (#8628)
# What
- Add missing RemoteLangChainRetriever _get_relevant_documents test

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-02 17:20:40 -07:00
William FH
206901fa01 Use salt instead of datetime (#8653)
If you want to kick off two runs at the same time, it'll cause errors.
Use a UUID instead.
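
For example, a salt like this stays unique even for simultaneous runs:

```
import uuid

project_name = f"eval-{uuid.uuid4().hex[:8]}"  # datetime-based names can collide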
2023-08-02 17:15:50 -07:00
William FH
7ea2b08d1f Use call directly for chain (#8655)
for run_on_dataset since the `run()` method requires a single output
2023-08-02 17:11:39 -07:00
William FH
368aa4ede7 fix enum error message (#8652)
could be a string so don't directly call value
2023-08-02 17:11:27 -07:00
millerick
5018af8839 docs: fix some grammar (#8654)
### Description
Fixes a grammar issue I noticed when reading through the documentation.

### Maintainers
@baskaryan

Co-authored-by: mmillerick <mmillerick@blend.com>
2023-08-02 16:48:01 -07:00
Erick Friis
96b0ff182e Enterprise support form wording (#8641) 2023-08-02 15:18:20 -07:00
Lance Martin
59194c2214 Add summarization use-case (#8376)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-02 14:25:11 -07:00
Will Thompson
ee1d13678e 🐛 Docs Fixes [2 one-liners, examples broken] (#8519)
## Description: 
   
1) The map-reduce example in the docs is missing an important import
statement. Figured other people would benefit from being able to copy 🍝
the code.

2) The RefineDocumentsChain example is also broken.

## Issue: 

None

## Dependencies:

None. One liner.

## Tag maintainer:

@baskaryan

## Twitter handle: 

I mean, it's a one line fix lol. But @will_thompson_k is my twitter
handle.
2023-08-02 13:39:41 -07:00
Leonid Ganeline
1335f2b9f8 MLflow examples (#8642)
Updated `MLflow` examples with links to the examples from MLflow

 @baskaryan
2023-08-02 13:30:28 -07:00
Kacper Łukawski
16551536e3 Refactor Qdrant integration (#8634)
This small PR introduces new parameters into Qdrant (`on_disk`), fixes
some tests and changes the error message to be more clear.

Tagging: @baskaryan, @rlancemartin, @eyurtsev
2023-08-02 10:30:18 -07:00
Erick Friis
c5fb3b6069 Enterprise support form in airtable (#8607) 2023-08-02 09:49:59 -07:00
Eugene Yurtsev
1ec0b18379 Re-add __add__ functionality for messages (revert #8245) (#8489)
This PR reverts #8245, so `__add__` is defined on base messages.

Resolves issue: https://github.com/langchain-ai/langchain/issues/8472
2023-08-02 10:51:44 -04:00
Bagatur
f31047a394 bump 250 (#8632) 2023-08-02 07:47:36 -07:00
Comendeiro
5c516945d0 Add local support for audio models (PR #7329) (#7591)
- Description: run the poetry dependencies
  - Issue: #7329 
  - Tag maintainer: @rlancemartin

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-02 01:24:53 -07:00
Naveen Tatikonda
d2adec3818 [Opensearch] : Fix the service validation in http_auth (#8609)
### Description
OpenSearch supports validation using both master credentials (username
and password) and IAM. For master credentials, users will not pass the
`service` argument in `http_auth`, and the existing code will break. To
fix this, I have updated the condition to check whether the service
attribute is present in http_auth before accessing it.

### Maintainers
@baskaryan @navneet1v

Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
2023-08-02 01:16:38 -07:00
Harrison Chase
7c5c0557cb cast to string when measuring token length (#8617) 2023-08-02 00:12:59 -07:00
rjanardhan3
68113348cc Fireworks integration (#8322)
Description - Integrates Fireworks within Langchain LLMs to allow users
to use Fireworks models with Langchain, mainly for summarization.

Issue - Not applicable
Dependencies - None
Tag maintainer - @rlancemartin

---------

Co-authored-by: Raj Janardhan <rajjanardhan@Rajs-Laptop.attlocal.net>
2023-08-01 21:17:26 -07:00
Bagatur
b574507c51 normalized openai embeddings embed_query (#8604)
we weren't normalizing when embedding queries
2023-08-01 17:12:10 -07:00
Neil Murphy
31820a31e4 Add firestore_client param to FirestoreChatMessageHistory if caller already has one; also lets them specify GCP project, etc. (#8601)
The existing implementation requires that you install the
`firebase-admin` package, and prevents you from using an existing
Firestore client instance if one is available.

This adds optional `firestore_client` param to
`FirestoreChatMessageHistory`, so users can just use their existing
client/settings. If not passed, existing logic executes to initialize a
`firestore_client`.
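
A minimal sketch of the new parameter; it assumes an already-initialized firebase_admin app, and the import path is an assumption:

```
from firebase_admin import firestore
from langchain.memory import FirestoreChatMessageHistory  # import path assumed

history = FirestoreChatMessageHistory(
    collection_name="chats",
    session_id="session-1",
    user_id="user-1",
    firestore_client=firestore.client(),  # reuse your existing client/settings
)
history.add_user_message("hi!")
```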

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-01 15:42:13 -07:00
Naveen Tatikonda
13ccf202de [OpenSearch] : Fix AOSS Initialization (#8600)
### Description
This PR fixes the AOSS Initialization in Opensearch.

### Maintainers
@rlancemartin, @eyurtsev, @navneet1v

Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
2023-08-01 15:33:51 -07:00
Joshua Carroll
6705928b9d Add StreamlitChatMessageHistory (#8497)
Add a StreamlitChatMessageHistory class that stores chat messages in
[Streamlit's Session
State](https://docs.streamlit.io/library/api-reference/session-state).
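
A minimal sketch (the session-state key is illustrative):

```
import streamlit as st
from langchain.memory import StreamlitChatMessageHistory

history = StreamlitChatMessageHistory(key="chat_messages")  # lives in st.session_state
history.add_user_message("hi!")
for msg in history.messages:
    st.write(f"{msg.type}: {msg.content}")
```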

Note: The integration test uses a currently-experimental Streamlit
testing framework to simulate the execution of a Streamlit app. Marking
this PR as draft until I confirm with the Streamlit team that we're
comfortable supporting it.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-01 14:28:15 -07:00
Matt Robinson
8961c720b8 docs: update unstructured install instructions (#8596)
### Summary

Updates the `unstructured` install instructions. For
`unstructured>=0.9.0`, dependencies are broken out by document type and
the base `unstructured` package includes fewer dependencies. `pip
install "unstructured[local-inference]"` has been replace by `pip
install "unstructured[all-docs]"`, though the `local-inference` extra is
still supported for the time being.

### Reviewers

- @rlancemartin
- @eyurtsev
- @hwchase17
2023-08-01 14:17:49 -07:00
Bagatur
73072d3db8 mv (#8595) 2023-08-01 14:17:04 -07:00
brettdbrewer
2de028834f updated to use new llm_util query (#8591)
- Description: added memgraph_graph.py, which defines the MemgraphGraph
class as a subclass of the existing Neo4jGraph class. This lets you
query the Memgraph graph database using natural language. It leverages
the Neo4j drivers and the Bolt protocol.
- Dependencies: since it is a subclass of Neo4jGraph, it depends on it
and on the GraphCypherQA chain implementations. It requires the Neo4j
drivers to be present and a running Memgraph instance to connect to.
  - Tag maintainer: @baskaryan
  - Twitter handle: @villageideate
- example usage can be seen in this repo
https://github.com/brettdbrewer/MemgraphGraph/

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-01 14:16:15 -07:00
Tesfagabir Meharizghi
a7000ee89e Callback handler for Amazon SageMaker Experiments (#8587)
## Description

This PR implements a callback handler for SageMaker Experiments which is
similar to that of mlflow.
* When creating the callback handler, it takes the experiment's run
object as an argument. All the callback outputs are then logged to the
run object.
* The output of each callback action (e.g., `on_llm_start`) is saved to
S3 bucket as json file.
* Optionally, you can also log additional information such as the LLM
hyper-parameters to the same run object.
* Once the callback object is no longer needed, you will need to call
the `flush_tracker()` method. This makes sure that any intermediate
files are deleted.
* A separate notebook example is provided to show how the callback is
used.

@3coins  @agola11

---------

Co-authored-by: Tesfagabir Meharizghi <mehariz@amazon.com>
2023-08-01 13:47:08 -07:00
Harrison Chase
9c2b29a1cb Harrison/loader bug (#8559)
Co-authored-by: ddroghini <d.droghini@mflgroup.com>
Co-authored-by: Buckler89 <Droghini.diego@gmail.com>
2023-08-01 13:31:49 -07:00
Kristelle Widjaja
f190bc3e83 Bug fix: feature/issue-7804-chroma-client_settings-bug (#8267)
Description: Made Chroma constructor more robust when client_settings is
provided. Otherwise, existing embeddings will not be loaded correctly
from Chroma.
Issue: #7804
Dependencies: None
Tag maintainer: @rlancemartin, @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-01 13:31:35 -07:00
mpb159753
7df2dfc4c2 Add Support for Loading Documents from Huawei OBS (#8573)
Description:
This PR adds support for loading documents from Huawei OBS (Object
Storage Service) in Langchain. OBS is a cloud-based object storage
service provided by Huawei Cloud. With this enhancement, Langchain users
can now easily access and load documents stored in Huawei OBS directly
into the system.

Key Changes:
- Added a new document loader module specifically for Huawei OBS
integration.
- Implemented the necessary logic to authenticate and connect to Huawei
OBS using access credentials.
- Enabled the loading of individual documents from a specified bucket
and object key in Huawei OBS.
- Provided the option to specify custom authentication information or
obtain security tokens from Huawei Cloud ECS for easy access.

How to Test:
1. Ensure the required package "esdk-obs-python" is installed.
2. Configure the endpoint, access key, secret key, and bucket details
for Huawei OBS in the Langchain settings.
3. Load documents from Huawei OBS using the updated document loader
module.
4. Verify that documents are successfully retrieved and loaded into
Langchain for further processing.
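
For orientation, a hypothetical sketch as described above; the loader name, argument names, and endpoint here are assumptions, not the confirmed API:

```
from langchain.document_loaders import OBSFileLoader  # name assumed

loader = OBSFileLoader(
    bucket="my-bucket",     # illustrative bucket
    key="docs/report.txt",  # illustrative object key
    endpoint="https://obs.cn-north-4.myhuaweicloud.com",  # illustrative endpoint
)
docs = loader.load()
```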

Please review this PR and let us know if any further improvements are
needed. Your feedback is highly appreciated!

@rlancemartin, @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-01 09:30:30 -07:00
Leonid Ganeline
ed9a0f8185 Docstrings: Module descriptions (#8262)
Added/changed the module descriptions (the first-line docstrings in the
`__init__` files).
Added class hierarchy info.
 @baskaryan
2023-08-01 09:12:32 -07:00
shibuiwilliam
465faab935 fix apparent spelling inconsistencies (#8574)
Use ImportErrors where appropriate
2023-08-01 09:09:09 -07:00
Nuno Campos
0ec020698f Add new run types for Runnables (#8488)
- allow overriding run_type in on_chain_start

2023-08-01 12:56:40 +01:00
673 changed files with 37302 additions and 10885 deletions


@@ -1,28 +1,20 @@
 <!-- Thank you for contributing to LangChain!
-Replace this comment with:
+Replace this entire comment with:
 - Description: a description of the change,
 - Issue: the issue # it fixes (if applicable),
 - Dependencies: any dependencies required for this change,
 - Tag maintainer: for a quicker response, tag the relevant maintainer (see below),
 - Twitter handle: we announce bigger features on Twitter. If your PR gets announced and you'd like a mention, we'll gladly shout you out!
-Please make sure you're PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally.
+Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally.
+See contribution guidelines for more information on how to write/run tests, lint, etc:
+https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
 If you're adding a new integration, please include:
 1. a test for the integration, preferably unit tests that do not rely on network access,
-2. an example notebook showing its use.
+2. an example notebook showing its use. These live is docs/extras directory.
 Maintainer responsibilities:
 - General / Misc / if you don't know who to tag: @baskaryan
 - DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
 - Models / Prompts: @hwchase17, @baskaryan
 - Memory: @hwchase17
 - Agents / Tools / Toolkits: @hinthornw
 - Tracing / Callbacks: @agola11
 - Async: @agola11
-If no one reviews your PR within a few days, feel free to @-mention the same people again.
-See contribution guidelines for more information on how to write/run tests, lint, etc: https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
+If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17, @rlancemartin.
 -->

View File

@@ -1,5 +1,5 @@
---
name: libs/langchain-experimental CI
name: libs/experimental CI
on:
push:

View File

@@ -1,5 +1,5 @@
---
name: libs/langchain-experimental Release
name: libs/experimental Release
on:
pull_request:

42
.github/workflows/scheduled_test.yml vendored Normal file
View File

@@ -0,0 +1,42 @@
name: Scheduled tests
on:
workflow_dispatch: # Allows triggering the workflow manually in the GitHub UI
schedule:
- cron: '0 13 * * *'
env:
POETRY_VERSION: "1.4.2"
jobs:
build:
defaults:
run:
working-directory: libs/langchain
runs-on: ubuntu-latest
environment: Scheduled testing
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
name: Python ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: "1.4.2"
working-directory: libs/langchain
install-command: |
echo "Running scheduled tests, installing dependencies with poetry..."
poetry install --with=test_integration
- name: Run tests
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
make scheduled_tests
shell: bash

View File

@@ -43,7 +43,12 @@ spell_fix:
help:
@echo '----'
@echo 'coverage - run unit tests and generate coverage report'
@echo 'clean - run docs_clean and api_docs_clean'
@echo 'docs_build - build the documentation'
@echo 'docs_clean - clean the documentation build artifacts'
@echo 'docs_linkcheck - run linkchecker on the documentation'
@echo 'api_docs_build - build the API Reference documentation'
@echo 'api_docs_clean - clean the API Reference documentation build artifacts'
@echo 'api_docs_linkcheck - run linkchecker on the API Reference documentation'
@echo 'spell_check - run codespell on the project'
@echo 'spell_fix - run codespell on the project and fix the errors'

View File

@@ -18,10 +18,10 @@
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/hwchase17/langchainjs).
**Production Support:** As you move your LangChains into production, we'd love to offer more comprehensive support.
Please fill out [this form](https://6w1pwbss0py.typeform.com/to/rrbrdTH2) and we'll set up a dedicated support Slack channel.
**Production Support:** As you move your LangChains into production, we'd love to offer more hands-on support.
Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to share more about what you're building, and our team will get in touch.
## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28
## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28/23
In an effort to make `langchain` leaner and safer, we are moving select chains to `langchain_experimental`.
This migration has already started, but we are remaining backwards compatible until 7/28.
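
For reference, a minimal sketch of what the import change looks like for one of the affected chains; `SQLDatabaseChain` and its `langchain_experimental.sql` module path are assumptions based on the migration described above, not an exhaustive list:

```python
# Hedged sketch of the migration path (module names assumed):

# Before: the chain was importable from the core package.
# from langchain.chains import SQLDatabaseChain

# After: the chain lives in the experimental package.
from langchain_experimental.sql import SQLDatabaseChain  # noqa: F401
```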

View File

@@ -100,6 +100,9 @@ extensions = [
]
source_suffix = [".rst"]
# some autodoc pydantic options are repeated in the actual template.
# potentially user error, but there may be bugs in the sphinx extension
# with options not being passed through correctly (from either location in the code)
autodoc_pydantic_model_show_json = False
autodoc_pydantic_field_list_validators = False
autodoc_pydantic_config_members = False
@@ -112,13 +115,6 @@ autodoc_member_order = "groupwise"
autoclass_content = "both"
autodoc_typehints_format = "short"
autodoc_default_options = {
"members": True,
"show-inheritance": True,
"inherited-members": "BaseModel",
"undoc-members": True,
"special-members": "__call__",
}
# autodoc_typehints = "description"
# Add any paths that contain templates here, relative to this directory.
templates_path = ["templates"]

View File

@@ -1,49 +1,215 @@
"""Script for auto-generating api_reference.rst"""
import glob
import re
"""Script for auto-generating api_reference.rst."""
import importlib
import inspect
import typing
from pathlib import Path
from typing import TypedDict, Sequence, List, Dict, Literal, Union
from enum import Enum
from pydantic import BaseModel
ROOT_DIR = Path(__file__).parents[2].absolute()
HERE = Path(__file__).parent
PKG_DIR = ROOT_DIR / "libs" / "langchain" / "langchain"
EXP_DIR = ROOT_DIR / "libs" / "experimental" / "langchain_experimental"
WRITE_FILE = Path(__file__).parent / "api_reference.rst"
EXP_WRITE_FILE = Path(__file__).parent / "experimental_api_reference.rst"
WRITE_FILE = HERE / "api_reference.rst"
EXP_WRITE_FILE = HERE / "experimental_api_reference.rst"
def load_members(dir: Path) -> dict:
members: dict = {}
for py in glob.glob(str(dir) + "/**/*.py", recursive=True):
module = py[len(str(dir)) + 1 :].replace(".py", "").replace("/", ".")
top_level = module.split(".")[0]
if top_level not in members:
members[top_level] = {"classes": [], "functions": []}
with open(py, "r") as f:
for line in f.readlines():
cls = re.findall(r"^class ([^_].*)\(", line)
members[top_level]["classes"].extend([module + "." + c for c in cls])
func = re.findall(r"^def ([^_].*)\(", line)
afunc = re.findall(r"^async def ([^_].*)\(", line)
func_strings = [module + "." + f for f in func + afunc]
members[top_level]["functions"].extend(func_strings)
return members
ClassKind = Literal["TypedDict", "Regular", "Pydantic", "enum"]
def construct_doc(pkg: str, members: dict) -> str:
class ClassInfo(TypedDict):
"""Information about a class."""
name: str
"""The name of the class."""
qualified_name: str
"""The fully qualified name of the class."""
kind: ClassKind
"""The kind of the class."""
is_public: bool
"""Whether the class is public or not."""
class FunctionInfo(TypedDict):
"""Information about a function."""
name: str
"""The name of the function."""
qualified_name: str
"""The fully qualified name of the function."""
is_public: bool
"""Whether the function is public or not."""
class ModuleMembers(TypedDict):
"""A dictionary of module members."""
classes_: Sequence[ClassInfo]
functions: Sequence[FunctionInfo]
def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
"""Load all members of a module.
Args:
module_path: Path to the module.
namespace: the namespace of the module.
Returns:
ModuleMembers: The classes and functions defined in the module.
"""
classes_: List[ClassInfo] = []
functions: List[FunctionInfo] = []
module = importlib.import_module(module_path)
for name, type_ in inspect.getmembers(module):
if not hasattr(type_, "__module__"):
continue
if type_.__module__ != module_path:
continue
if inspect.isclass(type_):
if type(type_) == typing._TypedDictMeta: # type: ignore
kind: ClassKind = "TypedDict"
elif issubclass(type_, Enum):
kind = "enum"
elif issubclass(type_, BaseModel):
kind = "Pydantic"
else:
kind = "Regular"
classes_.append(
ClassInfo(
name=name,
qualified_name=f"{namespace}.{name}",
kind=kind,
is_public=not name.startswith("_"),
)
)
elif inspect.isfunction(type_):
functions.append(
FunctionInfo(
name=name,
qualified_name=f"{namespace}.{name}",
is_public=not name.startswith("_"),
)
)
else:
continue
return ModuleMembers(
classes_=classes_,
functions=functions,
)
def _merge_module_members(
module_members: Sequence[ModuleMembers],
) -> ModuleMembers:
"""Merge module members."""
classes_: List[ClassInfo] = []
functions: List[FunctionInfo] = []
for module in module_members:
classes_.extend(module["classes_"])
functions.extend(module["functions"])
return ModuleMembers(
classes_=classes_,
functions=functions,
)
def _load_package_modules(
package_directory: Union[str, Path]
) -> Dict[str, ModuleMembers]:
"""Recursively load modules of a package based on the file system.
Traversal based on the file system makes it easy to determine which
of the modules/packages are part of the package vs. 3rd party or built-in.
Parameters:
package_directory: Path to the package directory.
Returns:
dict: A dictionary mapping top-level namespaces to their module members.
"""
package_path = (
Path(package_directory)
if isinstance(package_directory, str)
else package_directory
)
modules_by_namespace = {}
package_name = package_path.name
for file_path in package_path.rglob("*.py"):
if file_path.name.startswith("_"):
continue
relative_module_name = file_path.relative_to(package_path)
if relative_module_name.name.startswith("_"):
continue
# Get the full namespace of the module
namespace = str(relative_module_name).replace(".py", "").replace("/", ".")
# Keep only the top level namespace
top_namespace = namespace.split(".")[0]
try:
module_members = _load_module_members(
f"{package_name}.{namespace}", namespace
)
# Merge module members if the namespace already exists
if top_namespace in modules_by_namespace:
existing_module_members = modules_by_namespace[top_namespace]
_module_members = _merge_module_members(
[existing_module_members, module_members]
)
else:
_module_members = module_members
modules_by_namespace[top_namespace] = _module_members
except ImportError as e:
print(f"Error: Unable to import module '{namespace}' with error: {e}")
return modules_by_namespace
def _construct_doc(pkg: str, members_by_namespace: Dict[str, ModuleMembers]) -> str:
"""Construct the contents of the reference.rst file for the given package.
Args:
pkg: The package name
members_by_namespace: The members of the package, dict organized by top level
module contains a list of classes and functions
inside of the top level namespace.
Returns:
The contents of the reference.rst file.
"""
full_doc = f"""\
=============
=======================
``{pkg}`` API Reference
=============
=======================
"""
for module, _members in sorted(members.items(), key=lambda kv: kv[0]):
classes = _members["classes"]
namespaces = sorted(members_by_namespace)
for module in namespaces:
_members = members_by_namespace[module]
classes = _members["classes_"]
functions = _members["functions"]
if not (classes or functions):
continue
section = f":mod:`{pkg}.{module}`"
underline = "=" * (len(section) + 1)
full_doc += f"""\
{section}
{'=' * (len(section) + 1)}
{underline}
.. automodule:: {pkg}.{module}
:no-members:
@@ -52,7 +218,6 @@ def construct_doc(pkg: str, members: dict) -> str:
"""
if classes:
cstring = "\n ".join(sorted(classes))
full_doc += f"""\
Classes
--------------
@@ -60,13 +225,31 @@ Classes
.. autosummary::
:toctree: {module}
:template: class.rst
{cstring}
"""
for class_ in classes:
if not class_["is_public"]:
continue
if class_["kind"] == "TypedDict":
template = "typeddict.rst"
elif class_["kind"] == "enum":
template = "enum.rst"
elif class_["kind"] == "Pydantic":
template = "pydantic.rst"
else:
template = "class.rst"
full_doc += f"""\
:template: {template}
{class_["qualified_name"]}
"""
if functions:
fstring = "\n ".join(sorted(functions))
_functions = [f["qualified_name"] for f in functions if f["is_public"]]
fstring = "\n ".join(sorted(_functions))
full_doc += f"""\
Functions
--------------
@@ -83,12 +266,15 @@ Functions
def main() -> None:
lc_members = load_members(PKG_DIR)
lc_doc = ".. _api_reference:\n\n" + construct_doc("langchain", lc_members)
"""Generate the reference.rst file for each package."""
lc_members = _load_package_modules(PKG_DIR)
lc_doc = ".. _api_reference:\n\n" + _construct_doc("langchain", lc_members)
with open(WRITE_FILE, "w") as f:
f.write(lc_doc)
exp_members = load_members(EXP_DIR)
exp_doc = ".. _experimental_api_reference:\n\n" + construct_doc("langchain_experimental", exp_members)
exp_members = _load_package_modules(EXP_DIR)
exp_doc = ".. _experimental_api_reference:\n\n" + _construct_doc(
"langchain_experimental", exp_members
)
with open(EXP_WRITE_FILE, "w") as f:
f.write(exp_doc)

File diff suppressed because one or more lines are too long

View File

@@ -1,4 +1,5 @@
-e libs/langchain
-e libs/experimental
autodoc_pydantic==1.8.0
myst_parser
nbsphinx==0.8.9
@@ -10,4 +11,4 @@ sphinx-panels
toml
myst_nb
sphinx_copybutton
pydata-sphinx-theme==0.13.1
pydata-sphinx-theme==0.13.1

View File

@@ -5,17 +5,6 @@
.. autoclass:: {{ objname }}
{% block methods %}
{% if methods %}
.. rubric:: {{ _('Methods') }}
.. autosummary::
{% for item in methods %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block attributes %}
{% if attributes %}
.. rubric:: {{ _('Attributes') }}
@@ -27,4 +16,21 @@
{% endif %}
{% endblock %}
{% block methods %}
{% if methods %}
.. rubric:: {{ _('Methods') }}
.. autosummary::
{% for item in methods %}
~{{ name }}.{{ item }}
{%- endfor %}
{% for item in methods %}
.. automethod:: {{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
.. example_links:: {{ objname }}

View File

@@ -0,0 +1,14 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
{% block attributes %}
{% for item in attributes %}
.. autoattribute:: {{ item }}
{% endfor %}
{% endblock %}
.. example_links:: {{ objname }}

View File

@@ -0,0 +1,22 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============
.. currentmodule:: {{ module }}
.. autopydantic_model:: {{ objname }}
:model-show-json: False
:model-show-config-summary: False
:model-show-validator-members: False
:model-show-field-summary: False
:field-signature-prefix: param
:members:
:undoc-members:
:inherited-members:
:member-order: groupwise
:show-inheritance: True
:special-members: __call__
{% block attributes %}
{% endblock %}
.. example_links:: {{ objname }}

View File

@@ -0,0 +1,14 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
{% block attributes %}
{% for item in attributes %}
.. autoattribute:: {{ item }}
{% endfor %}
{% endblock %}
.. example_links:: {{ objname }}

View File

@@ -19,7 +19,7 @@
{% block htmltitle %}
<title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{% endblock %}
<link rel="canonical" href="http://scikit-learn.org/stable/{{pagename}}.html" />
<link rel="canonical" href="https://api.python.langchain.com/en/latest/{{pagename}}.html" />
{% if favicon_url %}
<link rel="shortcut icon" href="{{ favicon_url|e }}"/>

View File

@@ -6,17 +6,6 @@
{%- set top_container_cls = "sk-landing-container" %}
{%- endif %}
{% if theme_link_to_live_contributing_page|tobool %}
{# Link to development page for live builds #}
{%- set development_link = "https://scikit-learn.org/dev/developers/index.html" %}
{# Open on a new development page in new window/tab for live builds #}
{%- set development_attrs = 'target="_blank" rel="noopener noreferrer"' %}
{%- else %}
{%- set development_link = pathto('developers/index') %}
{%- set development_attrs = '' %}
{%- endif %}
<nav id="navbar" class="{{ nav_bar_class }} navbar navbar-expand-md navbar-light bg-light py-0">
<div class="container-fluid {{ top_container_cls }} px-0">
{%- if logo_url %}

View File

@@ -0,0 +1,54 @@
# Community Navigator
Hi! Thanks for being here. We're lucky to have a community of so many passionate developers building with LangChain; we have so much to teach and learn from each other. Community members contribute code, host meetups, write blog posts, amplify each other's work, become each other's customers and collaborators, and so much more.
Whether you're new to LangChain, looking to go deeper, or just want to get more exposure to the world of building with LLMs, this page can point you in the right direction.
- **🦜 Contribute to LangChain**
- **🌍 Meetups, Events, and Hackathons**
- **📣 Help Us Amplify Your Work**
- **💬 Stay in the loop**
# 🦜 Contribute to LangChain
LangChain is the product of 5,000+ contributions by 1,500+ contributors, and there is **still** so much to do together. Here are some ways to get involved:
- **[Open a pull request](https://github.com/langchain-ai/langchain/issues):** we'd appreciate all forms of contributions: new features, infrastructure improvements, better documentation, bug fixes, etc. If you have an improvement or an idea, we'd love to work on it with you.
- **[Read our contributor guidelines:](https://github.com/langchain-ai/langchain/blob/bbd22b9b761389a5e40fc45b0570e1830aabb707/.github/CONTRIBUTING.md)** We ask contributors to follow a ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow, run a few local checks for formatting, linting, and testing before submitting, and follow certain documentation and testing conventions.
- **First time contributor?** [Try one of these PRs with the “good first issue” tag](https://github.com/langchain-ai/langchain/contribute).
- **Become an expert:** our experts help the community by answering product questions in Discord. If that's a role you'd like to play, we'd be so grateful! (And we have some special experts-only goodies/perks we can tell you more about.) Send us an email to introduce yourself at hello@langchain.dev and we'll take it from there!
- **Integrate with LangChain:** if your product integrates with LangChain (or aspires to), we want to help make sure the experience is as smooth as possible for you and end users. Send us an email at hello@langchain.dev and tell us what you're working on.
- **Become an Integration Maintainer:** Partner with our team to ensure your integration stays up-to-date and talk directly with users (and answer their inquiries) in our Discord. Introduce yourself at hello@langchain.dev if you'd like to explore this role.
# 🌍 Meetups, Events, and Hackathons
One of our favorite things about working in AI is how much enthusiasm there is for building together. We want to help make that as easy and impactful for you as possible!
- **Find a meetup, hackathon, or webinar:** you can find the one for you on our [global events calendar](https://mirror-feeling-d80.notion.site/0bc81da76a184297b86ca8fc782ee9a3?v=0d80342540df465396546976a50cfb3f).
- **Submit an event to our calendar:** email us at events@langchain.dev with a link to your event page! We can also help you spread the word with our local communities.
- **Host a meetup:** If you want to bring a group of builders together, we want to help! We can publicize your event on our event calendar/Twitter, share with our local communities in Discord, send swag, or potentially hook you up with a sponsor. Email us at events@langchain.dev to tell us about your event!
- **Become a meetup sponsor:** we often hear from groups of builders that want to get together, but are blocked or limited on some dimension (space to host, budget for snacks, prizes to distribute, etc.). If you'd like to help, send us an email at events@langchain.dev and we can share more about how it works!
- **Speak at an event:** meetup hosts are always looking for great speakers, presenters, and panelists. If you'd like to do that at an event, send us an email at hello@langchain.dev with more information about yourself, what you want to talk about, and what city you're based in, and we'll try to match you with an upcoming event!
- **Tell us about your LLM community:** If you host or participate in a community that would welcome support from LangChain and/or our team, send us an email at hello@langchain.dev and let us know how we can help.
# 📣 Help Us Amplify Your Work
If you're working on something you're proud of, and think the LangChain community would benefit from knowing about it, we want to help you show it off.
- **Post about your work and mention us:** we love hanging out on Twitter to see what people in the space are talking about and working on. If you tag [@langchainai](https://twitter.com/LangChainAI), we'll almost certainly see it and can show you some love.
- **Publish something on our blog:** if you're writing about your experience building with LangChain, we'd love to post (or crosspost) it on our blog! Email hello@langchain.dev with a draft of your post, or even an idea for something you want to write about.
- **Get your product onto our [integrations hub](https://integrations.langchain.com/):** Many developers take advantage of our seamless integrations with other products, and come to our integrations hub to find out who those are. If you want to get your product up there, tell us about it (and how it works with LangChain) at hello@langchain.dev.
# ☀️ Stay in the loop
Here's where our team hangs out, talks shop, spotlights cool work, and shares what we're up to. We'd love to see you there too.
- **[Twitter](https://twitter.com/LangChainAI):** we post about what we're working on and what cool things we're seeing in the space. If you tag @langchainai in your post, we'll almost certainly see it, and can show you some love!
- **[Discord](https://discord.gg/6adMQxSpJS):** connect with >30k developers who are building with LangChain
- **[GitHub](https://github.com/langchain-ai/langchain):** open pull requests, contribute to a discussion, and/or contribute
- **[Subscribe to our bi-weekly Release Notes](https://6w1pwbss0py.typeform.com/to/KjZB1auB):** a twice/month email roundup of the coolest things going on in our orbit
- **Slack:** if you're building an application in production at your company, we'd love to get into a Slack channel together. Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) and we'll get in touch about setting one up.

View File

@@ -8,9 +8,9 @@ import DocCardList from "@theme/DocCardList";
Building applications with language models involves many moving parts. One of the most critical components is ensuring that the outcomes produced by your models are reliable and useful across a broad array of inputs, and that they work well with your application's other software components. Ensuring reliability usually boils down to some combination of application design, testing & evaluation, and runtime checks.
The guides in this section review the APIs and functionality LangChain provides to help yous better evaluate your applications. Evaluation and testing are both critical when thinking about deploying LLM applications, since production environments require repeatable and useful outcomes.
The guides in this section review the APIs and functionality LangChain provides to help you better evaluate your applications. Evaluation and testing are both critical when thinking about deploying LLM applications, since production environments require repeatable and useful outcomes.
LangChain offers various types of evaluators to help you measure performance and integrity on diverse data, and we hope to encourage the the community to create and share other useful evaluators so everyone can improve. These docs will introduce the evaluator types, how to use them, and provide some examples of their use in real-world scenarios.
LangChain offers various types of evaluators to help you measure performance and integrity on diverse data, and we hope to encourage the community to create and share other useful evaluators so everyone can improve. These docs will introduce the evaluator types, how to use them, and provide some examples of their use in real-world scenarios.
Each evaluator type in LangChain comes with ready-to-use implementations and an extensible API that allows for customization according to your unique requirements. Here are some of the types of evaluators we offer:
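
For instance, a hedged sketch of loading and running one such evaluator; the `load_evaluator` helper, the `"criteria"` evaluator name, and an OpenAI key in the environment are assumptions for illustration:

```python
from langchain.evaluation import load_evaluator

# Load a criteria evaluator that grades outputs for conciseness
# (it uses an LLM under the hood, so an API key must be configured).
evaluator = load_evaluator("criteria", criteria="conciseness")

result = evaluator.evaluate_strings(
    prediction="The sum of 2 and 2 is, without any doubt, the number four.",
    input="What is 2 + 2?",
)
print(result)  # a dict with the evaluator's reasoning, value, and score
```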

View File

@@ -5,8 +5,8 @@ import DocCardList from "@theme/DocCardList";
LangSmith helps you trace and evaluate your language model applications and intelligent agents to help you
move from prototype to production.
Check out the [interactive walkthrough](walkthrough) below to get started.
Check out the [interactive walkthrough](/docs/guides/langsmith/walkthrough) below to get started.
For more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/)
<DocCardList />
<DocCardList />

View File

@@ -12,7 +12,7 @@ Here are the agents available in LangChain.
### [Zero-shot ReAct](/docs/modules/agents/agent_types/react.html)
This agent uses the [ReAct](https://arxiv.org/pdf/2205.00445.pdf) framework to determine which tool to use
This agent uses the [ReAct](https://arxiv.org/pdf/2210.03629) framework to determine which tool to use
based solely on the tool's description. Any number of tools can be provided.
This agent requires that a description is provided for each tool.
@@ -28,7 +28,7 @@ navigating around a browser.
### [OpenAI Functions](/docs/modules/agents/agent_types/openai_functions_agent.html)
Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been explicitly fine-tuned to detect when a
function should to be called and respond with the inputs that should be passed to the function.
function should be called and respond with the inputs that should be passed to the function.
The OpenAI Functions Agent is designed to work with these models.
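As a hedged sketch of constructing such an agent (the `llm-math` tool and the exact model name are illustrative assumptions):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

# This agent type requires a function-calling capable model.
llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
tools = load_tools(["llm-math"], llm=llm)

agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)
agent.run("What is 7 raised to the 0.5 power?")
```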
### [Conversational](/docs/modules/agents/agent_types/chat_conversation_agent.html)

View File

@@ -1,6 +1,6 @@
# OpenAI functions
Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to detect when a function should to be called and respond with the inputs that should be passed to the function.
Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to detect when a function should be called and respond with the inputs that should be passed to the function.
In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions.
The goal of the OpenAI Function APIs is to more reliably return valid and useful function calls than a generic text completion or chat API.
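A minimal sketch of describing a function to one of these models; the weather function schema is a made-up example, and the call is routed through LangChain's chat model wrapper:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# A made-up function description in the OpenAI function-calling format.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
            },
            "required": ["location"],
        },
    }
]

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
message = llm.predict_messages(
    [HumanMessage(content="What's the weather in Boston?")], functions=functions
)
# Instead of plain text, the model returns a structured function call.
print(message.additional_kwargs.get("function_call"))
```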

View File

@@ -18,5 +18,3 @@ Let chains choose which tools to use given high-level directives
Persist application state between runs of a chain
#### [Callbacks](/docs/modules/callbacks/)
Log and stream intermediate steps of any chain
#### [Evaluation](/docs/modules/evaluation/)
Evaluate the performance of a chain.

View File

@@ -3,10 +3,12 @@ sidebar_position: 0
---
# Prompts
The new way of programming models is through prompts.
A **prompt** refers to the input to the model.
This input is often constructed from multiple components.
LangChain provides several classes and functions to make constructing and working with prompts easy.
A prompt for a language model is a set of instructions or input provided by a user to
guide the model's response, helping it understand the context and generate relevant
and coherent language-based output, such as answering questions, completing sentences,
or engaging in a conversation.
- [Prompt templates](/docs/modules/model_io/prompts/prompt_templates/): Parametrize model inputs
LangChain provides several classes and functions to help construct and work with prompts.
- [Prompt templates](/docs/modules/model_io/prompts/prompt_templates/): Parametrized model inputs
- [Example selectors](/docs/modules/model_io/prompts/example_selectors/): Dynamically select examples to include in prompts
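For instance, a minimal sketch of a parametrized prompt (the company-name template is an illustrative assumption):

```python
from langchain.prompts import PromptTemplate

# A template with one input variable that is filled in at call time.
prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
print(prompt.format(product="colorful socks"))
# -> What is a good name for a company that makes colorful socks?
```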

View File

@@ -4,18 +4,15 @@ sidebar_position: 0
# Prompt templates
Language models take text as input - that text is commonly referred to as a prompt.
Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
LangChain provides several classes and functions to make constructing and working with prompts easy.
Prompt templates are pre-defined recipes for generating prompts for language models.
## What is a prompt template?
A template may include instructions, few shot examples, and specific context and
questions appropriate for a given task.
A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template"), that can take in a set of parameters from the end user and generates a prompt.
LangChain provides tooling to create and work with prompt templates.
A prompt template can contain:
- instructions to the language model,
- a set of few shot examples to help the language model generate a better response,
- a question to the language model.
LangChain strives to create model agnostic templates to make it easy to reuse
existing templates across different language models.
import GetStarted from "@snippets/modules/model_io/prompts/prompt_templates/get_started.mdx"

View File

@@ -1,9 +0,0 @@
---
sidebar_position: 0
---
# API chains
APIChain enables using LLMs to interact with APIs to retrieve relevant information. Construct the chain by providing a question relevant to the provided API documentation.
import Example from "@snippets/modules/chains/popular/api.mdx"
<Example/>

View File

@@ -1,8 +0,0 @@
# Summarization
A summarization chain can be used to summarize multiple documents. One way is to input multiple smaller documents, after they have been divided into chunks, and operate over them with a MapReduceDocumentsChain. You can also choose instead for the chain that does summarization to be a StuffDocumentsChain, or a RefineDocumentsChain.
import Example from "@snippets/modules/chains/popular/summarize.mdx"
<Example/>

View File

@@ -0,0 +1,9 @@
---
sidebar_position: 3
---
# Web Scraping
Web scraping has historically been a challenging endeavor due to the ever-changing nature of website structures, making it tedious for developers to maintain their scraping scripts. Traditional methods often rely on specific HTML tags and patterns which, when altered, can disrupt data extraction processes.
Enter the LLM-based method for parsing HTML: by leveraging the capabilities of LLMs, and especially OpenAI Functions in LangChain's extraction chain, developers can instruct the model to extract only the desired data in a specified format. This method not only streamlines the extraction process but also significantly reduces the time spent on manual debugging and script modifications. Its adaptability means that even if websites undergo significant design changes, the extraction remains consistent and robust. This level of resilience translates to reduced maintenance effort and cost savings, and ensures a higher quality of extracted data. Compared to its predecessors, the LLM-based approach wins out in the web scraping domain by transforming a historically cumbersome task into a more automated and efficient process.
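As a hedged sketch of the idea, here is what extraction over a page's text might look like; the schema fields and the inline `page_text` are illustrative assumptions, and in practice the text would come from a document loader:

```python
from langchain.chains import create_extraction_chain
from langchain.chat_models import ChatOpenAI

# Describe only the fields we want; the model ignores the rest of the markup.
schema = {
    "properties": {
        "article_title": {"type": "string"},
        "article_summary": {"type": "string"},
    },
    "required": ["article_title"],
}

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
chain = create_extraction_chain(schema, llm)

# page_text would normally come from a loader / HTML-to-text transformer.
page_text = "<h1>LangChain ships a new release</h1><p>Highlights include...</p>"
print(chain.run(page_text))
```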

View File

@@ -128,6 +128,10 @@ const config = {
hideable: true,
},
},
colorMode: {
disableSwitch: false,
respectPrefersColorScheme: true,
},
prism: {
theme: {
...baseLightCodeBlockTheme,

View File

@@ -12,7 +12,7 @@
"@docusaurus/preset-classic": "2.4.0",
"@docusaurus/remark-plugin-npm2yarn": "^2.4.0",
"@mdx-js/react": "^1.6.22",
"@mendable/search": "^0.0.125",
"@mendable/search": "^0.0.150",
"clsx": "^1.2.1",
"json-loader": "^0.5.7",
"process": "^0.11.10",
@@ -3212,10 +3212,11 @@
}
},
"node_modules/@mendable/search": {
"version": "0.0.125",
"resolved": "https://registry.npmjs.org/@mendable/search/-/search-0.0.125.tgz",
"integrity": "sha512-Mb1J3zDhOyBZV9cXqJocSOBNYGpe8+LQDqd9n9laPWxosSJcSTUewqtlIbMerrYsScBsxskoSiWgRsc7xF5z0Q==",
"version": "0.0.150",
"resolved": "https://registry.npmjs.org/@mendable/search/-/search-0.0.150.tgz",
"integrity": "sha512-Eb5SeAWlMxzEim/8eJ/Ysn01Pyh39xlPBzRBw/5OyOBhti0HVLXk4wd1Fq2TKgJC2ppQIvhEKO98PUcj9dNDFw==",
"dependencies": {
"html-react-parser": "^4.2.0",
"posthog-js": "^1.45.1"
},
"peerDependencies": {
@@ -8332,6 +8333,33 @@
"safe-buffer": "~5.1.0"
}
},
"node_modules/html-dom-parser": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/html-dom-parser/-/html-dom-parser-4.0.0.tgz",
"integrity": "sha512-TUa3wIwi80f5NF8CVWzkopBVqVAtlawUzJoLwVLHns0XSJGynss4jiY0mTWpiDOsuyw+afP+ujjMgRh9CoZcXw==",
"dependencies": {
"domhandler": "5.0.3",
"htmlparser2": "9.0.0"
}
},
"node_modules/html-dom-parser/node_modules/htmlparser2": {
"version": "9.0.0",
"resolved": "https://registry.npmjs.org/htmlparser2/-/htmlparser2-9.0.0.tgz",
"integrity": "sha512-uxbSI98wmFT/G4P2zXx4OVx04qWUmyFPrD2/CNepa2Zo3GPNaCaaxElDgwUrwYWkK1nr9fft0Ya8dws8coDLLQ==",
"funding": [
"https://github.com/fb55/htmlparser2?sponsor=1",
{
"type": "github",
"url": "https://github.com/sponsors/fb55"
}
],
"dependencies": {
"domelementtype": "^2.3.0",
"domhandler": "^5.0.3",
"domutils": "^3.1.0",
"entities": "^4.5.0"
}
},
"node_modules/html-entities": {
"version": "2.4.0",
"resolved": "https://registry.npmjs.org/html-entities/-/html-entities-2.4.0.tgz",
@@ -8375,6 +8403,20 @@
"node": ">= 12"
}
},
"node_modules/html-react-parser": {
"version": "4.2.0",
"resolved": "https://registry.npmjs.org/html-react-parser/-/html-react-parser-4.2.0.tgz",
"integrity": "sha512-gzU55AS+FI6qD7XaKe5BLuLFM2Xw0/LodfMWZlxV9uOHe7LCD5Lukx/EgYuBI3c0kLu0XlgFXnSzO0qUUn3Vrg==",
"dependencies": {
"domhandler": "5.0.3",
"html-dom-parser": "4.0.0",
"react-property": "2.0.0",
"style-to-js": "1.1.3"
},
"peerDependencies": {
"react": "0.14 || 15 || 16 || 17 || 18"
}
},
"node_modules/html-tags": {
"version": "3.3.1",
"resolved": "https://registry.npmjs.org/html-tags/-/html-tags-3.3.1.tgz",
@@ -11762,6 +11804,11 @@
"webpack": ">=4.41.1 || 5.x"
}
},
"node_modules/react-property": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/react-property/-/react-property-2.0.0.tgz",
"integrity": "sha512-kzmNjIgU32mO4mmH5+iUyrqlpFQhF8K2k7eZ4fdLSOPFrD1XgEuSBv9LDEgxRXTMBqMd8ppT0x6TIzqE5pdGdw=="
},
"node_modules/react-router": {
"version": "5.3.4",
"resolved": "https://registry.npmjs.org/react-router/-/react-router-5.3.4.tgz",
@@ -13127,6 +13174,22 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/style-to-js": {
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/style-to-js/-/style-to-js-1.1.3.tgz",
"integrity": "sha512-zKI5gN/zb7LS/Vm0eUwjmjrXWw8IMtyA8aPBJZdYiQTXj4+wQ3IucOLIOnF7zCHxvW8UhIGh/uZh/t9zEHXNTQ==",
"dependencies": {
"style-to-object": "0.4.1"
}
},
"node_modules/style-to-js/node_modules/style-to-object": {
"version": "0.4.1",
"resolved": "https://registry.npmjs.org/style-to-object/-/style-to-object-0.4.1.tgz",
"integrity": "sha512-HFpbb5gr2ypci7Qw+IOhnP2zOU7e77b+rzM+wTzXzfi1PrtBCX0E7Pk4wL4iTLnhzZ+JgEGAhX81ebTg/aYjQw==",
"dependencies": {
"inline-style-parser": "0.1.1"
}
},
"node_modules/style-to-object": {
"version": "0.3.0",
"resolved": "https://registry.npmjs.org/style-to-object/-/style-to-object-0.3.0.tgz",

View File

@@ -23,7 +23,7 @@
"@docusaurus/preset-classic": "2.4.0",
"@docusaurus/remark-plugin-npm2yarn": "^2.4.0",
"@mdx-js/react": "^1.6.22",
"@mendable/search": "^0.0.125",
"@mendable/search": "^0.0.150",
"clsx": "^1.2.1",
"json-loader": "^0.5.7",
"process": "^0.11.10",

View File

@@ -75,6 +75,7 @@ module.exports = {
slug: "additional_resources",
},
},
'community'
],
integrations: [
{

Binary file not shown.

After

Width:  |  Height:  |  Size: 405 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 471 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 520 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 98 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 117 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 93 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 102 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 307 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 193 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 190 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 125 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 131 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 211 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 132 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 119 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 266 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 196 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 90 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 174 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 111 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 130 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 152 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 172 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 716 KiB

View File

@@ -556,6 +556,14 @@
"source": "/docs/integrations/llamacpp",
"destination": "/docs/integrations/providers/llamacpp"
},
{
"source": "/en/latest/integrations/log10.html",
"destination": "/docs/integrations/providers/log10"
},
{
"source": "/docs/integrations/log10",
"destination": "/docs/integrations/providers/log10"
},
{
"source": "/en/latest/integrations/mediawikidump.html",
"destination": "/docs/integrations/providers/mediawikidump"
@@ -3951,6 +3959,10 @@
{
"source": "/docs/modules/chains/additional/tagging",
"destination": "/docs/use_cases/tagging"
},
{
"source": "docs/integrations/providers/agent_with_wandb_tracing",
"destination": "docs/integrations/providers/wandb_tracing"
}
]
}

View File

@@ -0,0 +1,323 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "700a516b",
"metadata": {},
"source": [
"# OpenAI Adapter\n",
"\n",
"A lot of people get started with OpenAI but want to explore other models. LangChain's integrations with many model providers make this easy to do so. While LangChain has it's own message and model APIs, we've also made it as easy as possible to explore other models by exposing an adapter to adapt LangChain models to the OpenAI api.\n",
"\n",
"At the moment this only deals with output and does not return other information (token counts, stop reasons, etc)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6017f26a",
"metadata": {},
"outputs": [],
"source": [
"import openai\n",
"from langchain.adapters import openai as lc_openai"
]
},
{
"cell_type": "markdown",
"id": "b522ceda",
"metadata": {},
"source": [
"## ChatCompletion.create"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "1d22eb61",
"metadata": {},
"outputs": [],
"source": [
"messages = [{\"role\": \"user\", \"content\": \"hi\"}]"
]
},
{
"cell_type": "markdown",
"id": "d550d3ad",
"metadata": {},
"source": [
"Original OpenAI call"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "e1d27dfa",
"metadata": {},
"outputs": [],
"source": [
"result = openai.ChatCompletion.create(\n",
" messages=messages, \n",
" model=\"gpt-3.5-turbo\", \n",
" temperature=0\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "012d81ae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'role': 'assistant', 'content': 'Hello! How can I assist you today?'}"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"choices\"][0]['message'].to_dict_recursive()"
]
},
{
"cell_type": "markdown",
"id": "db5b5500",
"metadata": {},
"source": [
"LangChain OpenAI wrapper call"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "87c2d515",
"metadata": {},
"outputs": [],
"source": [
"lc_result = lc_openai.ChatCompletion.create(\n",
" messages=messages, \n",
" model=\"gpt-3.5-turbo\", \n",
" temperature=0\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "c67a5ac8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'role': 'assistant', 'content': 'Hello! How can I assist you today?'}"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"lc_result[\"choices\"][0]['message']"
]
},
{
"cell_type": "markdown",
"id": "034ba845",
"metadata": {},
"source": [
"Swapping out model providers"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "7a2c011c",
"metadata": {},
"outputs": [],
"source": [
"lc_result = lc_openai.ChatCompletion.create(\n",
" messages=messages, \n",
" model=\"claude-2\", \n",
" temperature=0, \n",
" provider=\"ChatAnthropic\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "f7c94827",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'role': 'assistant', 'content': ' Hello!'}"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"lc_result[\"choices\"][0]['message']"
]
},
{
"cell_type": "markdown",
"id": "cb3f181d",
"metadata": {},
"source": [
"## ChatCompletion.stream"
]
},
{
"cell_type": "markdown",
"id": "f7b8cd18",
"metadata": {},
"source": [
"Original OpenAI call"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "fd8cb1ea",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'role': 'assistant', 'content': ''}\n",
"{'content': 'Hello'}\n",
"{'content': '!'}\n",
"{'content': ' How'}\n",
"{'content': ' can'}\n",
"{'content': ' I'}\n",
"{'content': ' assist'}\n",
"{'content': ' you'}\n",
"{'content': ' today'}\n",
"{'content': '?'}\n",
"{}\n"
]
}
],
"source": [
"for c in openai.ChatCompletion.create(\n",
" messages = messages,\n",
" model=\"gpt-3.5-turbo\", \n",
" temperature=0,\n",
" stream=True\n",
"):\n",
" print(c[\"choices\"][0]['delta'].to_dict_recursive())"
]
},
{
"cell_type": "markdown",
"id": "0b2a076b",
"metadata": {},
"source": [
"LangChain OpenAI wrapper call"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "9521218c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'role': 'assistant', 'content': ''}\n",
"{'content': 'Hello'}\n",
"{'content': '!'}\n",
"{'content': ' How'}\n",
"{'content': ' can'}\n",
"{'content': ' I'}\n",
"{'content': ' assist'}\n",
"{'content': ' you'}\n",
"{'content': ' today'}\n",
"{'content': '?'}\n",
"{}\n"
]
}
],
"source": [
"for c in lc_openai.ChatCompletion.create(\n",
" messages = messages,\n",
" model=\"gpt-3.5-turbo\", \n",
" temperature=0,\n",
" stream=True\n",
"):\n",
" print(c[\"choices\"][0]['delta'])"
]
},
{
"cell_type": "markdown",
"id": "0fc39750",
"metadata": {},
"source": [
"Swapping out model providers"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "68f0214e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'role': 'assistant', 'content': ' Hello'}\n",
"{'content': '!'}\n",
"{}\n"
]
}
],
"source": [
"for c in lc_openai.ChatCompletion.create(\n",
" messages = messages,\n",
" model=\"claude-2\", \n",
" temperature=0,\n",
" stream=True,\n",
" provider=\"ChatAnthropic\",\n",
"):\n",
" print(c[\"choices\"][0]['delta'])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -22,7 +22,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 1,
"id": "466b65b3",
"metadata": {},
"outputs": [],
@@ -171,9 +171,7 @@
"cell_type": "code",
"execution_count": 9,
"id": "decf7710",
"metadata": {
"scrolled": false
},
"metadata": {},
"outputs": [
{
"data": {
@@ -202,7 +200,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 10,
"id": "f799664d",
"metadata": {},
"outputs": [],
@@ -347,7 +345,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 12,
"id": "5d3d8ffe",
"metadata": {},
"outputs": [],
@@ -368,7 +366,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 2,
"id": "33be32af",
"metadata": {},
"outputs": [],
@@ -380,7 +378,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 3,
"id": "df3f3fa2",
"metadata": {},
"outputs": [],
@@ -424,9 +422,7 @@
"cell_type": "code",
"execution_count": 18,
"id": "f3040b0c",
"metadata": {
"scrolled": false
},
"metadata": {},
"outputs": [
{
"name": "stderr",
@@ -477,9 +473,7 @@
"cell_type": "code",
"execution_count": 20,
"id": "7ee8b2d4",
"metadata": {
"scrolled": false
},
"metadata": {},
"outputs": [
{
"name": "stderr",
@@ -515,7 +509,7 @@
},
{
"cell_type": "code",
"execution_count": 66,
"execution_count": 4,
"id": "3f30c348",
"metadata": {},
"outputs": [],
@@ -526,7 +520,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 5,
"id": "64ab1dbf",
"metadata": {},
"outputs": [],
@@ -544,7 +538,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 6,
"id": "7d628c97",
"metadata": {},
"outputs": [],
@@ -559,7 +553,7 @@
},
{
"cell_type": "code",
"execution_count": 68,
"execution_count": 7,
"id": "f60a5d0f",
"metadata": {},
"outputs": [],
@@ -572,7 +566,7 @@
},
{
"cell_type": "code",
"execution_count": 69,
"execution_count": 8,
"id": "7d007db6",
"metadata": {},
"outputs": [],
@@ -589,25 +583,29 @@
},
{
"cell_type": "code",
"execution_count": 70,
"execution_count": 16,
"id": "5c32cc89",
"metadata": {},
"outputs": [],
"source": [
"conversational_qa_chain = RunnableMap({\n",
" \"standalone_question\": {\n",
" \"question\": lambda x: x[\"question\"],\n",
" \"chat_history\": lambda x: _format_chat_history(x['chat_history'])\n",
" } | CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),\n",
"}) | {\n",
"_inputs = RunnableMap(\n",
" {\n",
" \"standalone_question\": {\n",
" \"question\": lambda x: x[\"question\"],\n",
" \"chat_history\": lambda x: _format_chat_history(x['chat_history'])\n",
" } | CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),\n",
" }\n",
")\n",
"_context = {\n",
" \"context\": itemgetter(\"standalone_question\") | retriever | _combine_documents,\n",
" \"question\": lambda x: x[\"standalone_question\"]\n",
"} | ANSWER_PROMPT | ChatOpenAI()"
"}\n",
"conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 71,
"execution_count": 17,
"id": "135c8205",
"metadata": {},
"outputs": [
@@ -624,7 +622,7 @@
"AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False)"
]
},
"execution_count": 71,
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -638,7 +636,7 @@
},
{
"cell_type": "code",
"execution_count": 62,
"execution_count": 15,
"id": "424e7e7a",
"metadata": {},
"outputs": [
@@ -655,7 +653,7 @@
"AIMessage(content='Harrison worked at Kensho.', additional_kwargs={}, example=False)"
]
},
"execution_count": 62,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@@ -667,6 +665,149 @@
"})"
]
},
{
"cell_type": "markdown",
"id": "c5543183",
"metadata": {},
"source": [
"### With Memory and returning source documents\n",
"\n",
"This shows how to use memory with the above. For memory, we need to manage that outside at the memory. For returning the retrieved documents, we just need to pass them through all the way."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "e31dd17c",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferMemory"
]
},
{
"cell_type": "code",
"execution_count": 44,
"id": "d4bffe94",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(return_messages=True, output_key=\"answer\", input_key=\"question\")"
]
},
{
"cell_type": "code",
"execution_count": 45,
"id": "733be985",
"metadata": {},
"outputs": [],
"source": [
"# First we add a step to load memory\n",
"# This needs to be a RunnableMap because its the first input\n",
"loaded_memory = RunnableMap(\n",
" {\n",
" \"question\": itemgetter(\"question\"),\n",
" \"memory\": memory.load_memory_variables,\n",
" }\n",
")\n",
"# Next we add a step to expand memory into the variables\n",
"expanded_memory = {\n",
" \"question\": itemgetter(\"question\"),\n",
" \"chat_history\": lambda x: x[\"memory\"][\"history\"]\n",
"}\n",
"\n",
"# Now we calculate the standalone question\n",
"standalone_question = {\n",
" \"standalone_question\": {\n",
" \"question\": lambda x: x[\"question\"],\n",
" \"chat_history\": lambda x: _format_chat_history(x['chat_history'])\n",
" } | CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),\n",
"}\n",
"# Now we retrieve the documents\n",
"retrieved_documents = {\n",
" \"docs\": itemgetter(\"standalone_question\") | retriever,\n",
" \"question\": lambda x: x[\"standalone_question\"]\n",
"}\n",
"# Now we construct the inputs for the final prompt\n",
"final_inputs = {\n",
" \"context\": lambda x: _combine_documents(x[\"docs\"]),\n",
" \"question\": itemgetter(\"question\")\n",
"}\n",
"# And finally, we do the part that returns the answers\n",
"answer = {\n",
" \"answer\": final_inputs | ANSWER_PROMPT | ChatOpenAI(),\n",
" \"docs\": itemgetter(\"docs\"),\n",
"}\n",
"# And now we put it all together!\n",
"final_chain = loaded_memory | expanded_memory | standalone_question | retrieved_documents | answer"
]
},
{
"cell_type": "code",
"execution_count": 46,
"id": "806e390c",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Number of requested results 4 is greater than number of elements in index 1, updating n_results = 1\n"
]
},
{
"data": {
"text/plain": [
"{'answer': AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False),\n",
" 'docs': [Document(page_content='harrison worked at kensho', metadata={})]}"
]
},
"execution_count": 46,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"question\": \"where did harrison work?\"}\n",
"result = final_chain.invoke(inputs)\n",
"result"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "977399fd",
"metadata": {},
"outputs": [],
"source": [
"# Note that the memory does not save automatically\n",
"# This will be improved in the future\n",
"# For now you need to save it yourself\n",
"memory.save_context(inputs, {\"answer\": result[\"answer\"].content})"
]
},
{
"cell_type": "code",
"execution_count": 48,
"id": "f94f7de4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [HumanMessage(content='where did harrison work?', additional_kwargs={}, example=False),\n",
" AIMessage(content='Harrison was employed at Kensho.', additional_kwargs={}, example=False)]}"
]
},
"execution_count": 48,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "markdown",
"id": "0f2bf8d3",
@@ -1391,10 +1532,299 @@
"response"
]
},
{
"cell_type": "markdown",
"id": "4927a727-b4c8-453c-8c83-bd87b4fcac14",
"metadata": {},
"source": [
"## Moderation\n",
"\n",
"This shows how to add in moderation (or other safeguards) around your LLM application."
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "4f5f6449-940a-4f5c-97c0-39b71c3e2a68",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import OpenAIModerationChain\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "fcb8312b-7e7a-424f-a3ec-76738c9a9d21",
"metadata": {},
"outputs": [],
"source": [
"moderate = OpenAIModerationChain()"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "b24b9148-f6b0-4091-8ea8-d3fb281bd950",
"metadata": {},
"outputs": [],
"source": [
"model = OpenAI()\n",
"prompt = ChatPromptTemplate.from_messages([\n",
" (\"system\", \"repeat after me: {input}\")\n",
"])"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "1c8ed87c-9ca6-4559-bf60-d40e94a0af08",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "5256b9bd-381a-42b0-bfa8-7e6d18f853cb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nYou are stupid.'"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"input\": \"you are stupid\"})"
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "fe6e3b33-dc9a-49d5-b194-ba750c58a628",
"metadata": {},
"outputs": [],
"source": [
"moderated_chain = chain | moderate"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "d8ba0cbd-c739-4d23-be9f-6ae092bd5ffb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'input': '\\n\\nYou are stupid.',\n",
" 'output': \"Text was found that violates OpenAI's content policy.\"}"
]
},
"execution_count": 37,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"moderated_chain.invoke({\"input\": \"you are stupid\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "179d3c03",
"id": "a0a85ba4-f782-47b8-b16f-8b7a61d6dab7",
"metadata": {},
"outputs": [],
"source": [
"## Conversational Retrieval With Memory"
]
},
{
"cell_type": "markdown",
"id": "92c87dd8-bb6f-4f32-a30d-8f5459ce6265",
"metadata": {},
"source": [
"## Fallbacks\n",
"\n",
"With LCEL you can easily introduce fallbacks for any Runnable component, like an LLM."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1b1cb744-31fc-4261-ab25-65fe1fcad559",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='To get to the other side.', additional_kwargs={}, example=False)"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"bad_llm = ChatOpenAI(model_name=\"gpt-fake\")\n",
"good_llm = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\n",
"llm = bad_llm.with_fallbacks([good_llm])\n",
"\n",
"llm.invoke(\"Why did the the chicken cross the road?\")"
]
},
{
"cell_type": "markdown",
"id": "b8cf3982-03f6-49b3-8ff5-7cd12444f19c",
"metadata": {},
"source": [
"Looking at the trace, we can see that the first model failed but the second succeeded, so we still got an output: https://smith.langchain.com/public/dfaf0bf6-d86d-43e9-b084-dd16a56df15c/r\n",
"\n",
"We can add an arbitrary sequence of fallbacks, which will be executed in order:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "31819be0-7f40-4e67-b5ab-61340027b948",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='To get to the other side.', additional_kwargs={}, example=False)"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm = bad_llm.with_fallbacks([bad_llm, bad_llm, good_llm])\n",
"\n",
"llm.invoke(\"Why did the the chicken cross the road?\")"
]
},
{
"cell_type": "markdown",
"id": "acad6e88-8046-450e-b005-db7e50f33b80",
"metadata": {},
"source": [
"Trace: https://smith.langchain.com/public/c09efd01-3184-4369-a225-c9da8efcaf47/r\n",
"\n",
"We can continue to use our Runnable with fallbacks the same way we use any Runnable, mean we can include it in sequences:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "bab114a1-bb93-4b7e-a639-e7e00f21aebc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='To show off its incredible jumping skills! Kangaroos are truly amazing creatures.', additional_kwargs={}, example=False)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You're a nice assistant who always includes a compliment in your response\"),\n",
" (\"human\", \"Why did the {animal} cross the road\"),\n",
" ]\n",
")\n",
"chain = prompt | llm\n",
"chain.invoke({\"animal\": \"kangaroo\"})"
]
},
{
"cell_type": "markdown",
"id": "58340afa-8187-4ffe-9bd2-7912fb733a15",
"metadata": {},
"source": [
"Trace: https://smith.langchain.com/public/ba03895f-f8bd-4c70-81b7-8b930353eabd/r\n",
"\n",
"Note, since every sequence of Runnables is itself a Runnable, we can create fallbacks for whole Sequences. We can also continue using the full interface, including asynchronous calls, batched calls, and streams:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "45aa3170-b2e6-430d-887b-bd879048060a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[\"\\n\\nAnswer: The rabbit crossed the road to get to the other side. That's quite clever of him!\",\n",
" '\\n\\nAnswer: The turtle crossed the road to get to the other side. You must be pretty clever to come up with that riddle!']"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"\n",
"chat_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You're a nice assistant who always includes a compliment in your response\"),\n",
" (\"human\", \"Why did the {animal} cross the road\"),\n",
" ]\n",
")\n",
"chat_model = ChatOpenAI(model_name=\"gpt-fake\")\n",
"\n",
"prompt_template = \"\"\"Instructions: You should always include a compliment in your response.\n",
"\n",
"Question: Why did the {animal} cross the road?\"\"\"\n",
"prompt = PromptTemplate.from_template(prompt_template)\n",
"llm = OpenAI()\n",
"\n",
"bad_chain = chat_prompt | chat_model\n",
"good_chain = prompt | llm\n",
"chain = bad_chain.with_fallbacks([good_chain])\n",
"await chain.abatch([{\"animal\": \"rabbit\"}, {\"animal\": \"turtle\"}])"
]
},
{
"cell_type": "markdown",
"id": "af6731c6-0c73-4b1d-a433-6e8f6ecce2bb",
"metadata": {},
"source": [
"Traces: \n",
"1. https://smith.langchain.com/public/ccd73236-9ae5-48a6-94b5-41210be18a46/r\n",
"2. https://smith.langchain.com/public/f43f608e-075c-45c7-bf73-b64e4d3f3082/r"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d2fe1fe-506b-4ee5-8056-8b9df801765f",
"metadata": {},
"outputs": [],
"source": []

View File

@@ -19,9 +19,22 @@
"- `ainvoke`: call the chain on an input async\n",
"- `abatch`: call the chain on a list of inputs async\n",
"\n",
"The type of the input varies by component. For a prompt it is a dictionary, for a retriever it is a single string, for a model either a single string, a list of chat messages, or a PromptValue.\n",
"The type of the input varies by component:\n",
"\n",
"The output type also varies by component. For an LLM it is a string, for a ChatModel it's a ChatMessage, for a prompt it's a PromptValue, for a retriever it's a list of documents.\n",
"| Component | Input Type |\n",
"| --- | --- |\n",
"|Prompt|Dictionary|\n",
"|Retriever|Single string|\n",
"|Model| Single string, list of chat messages or a PromptValue|\n",
"\n",
"The output type also varies by component:\n",
"\n",
"| Component | Output Type |\n",
"| --- | --- |\n",
"| LLM | String |\n",
"| ChatModel | ChatMessage |\n",
"| Prompt | PromptValue |\n",
"| Retriever | List of documents |\n",
"\n",
"Let's take a look at these methods! To do so, we'll create a super simple PromptTemplate + ChatModel chain."
]
@@ -95,7 +108,7 @@
],
"source": [
"for s in chain.stream({\"topic\": \"bears\"}):\n",
" print(s.content, end=\"\")"
" print(s.content, end=\"\", flush=True)"
]
},
{
@@ -183,7 +196,7 @@
],
"source": [
"async for s in chain.astream({\"topic\": \"bears\"}):\n",
" print(s.content, end=\"\")"
" print(s.content, end=\"\", flush=True)"
]
},
{

View File

@@ -147,7 +147,7 @@
" api_key=os.environ[\"ARGILLA_API_KEY\"],\n",
")\n",
"\n",
"dataset.push_to_argilla(\"langchain-dataset\")"
"dataset.push_to_argilla(\"langchain-dataset\");"
]
},
{

View File

@@ -0,0 +1,382 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": true,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"# Label Studio\n",
"\n",
"<div>\n",
"<img src=\"https://labelstudio-pub.s3.amazonaws.com/lc/open-source-data-labeling-platform.png\" width=\"400\"/>\n",
"</div>\n",
"\n",
"Label Studio is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.\n",
"\n",
"In this guide, you will learn how to connect a LangChain pipeline to Label Studio to:\n",
"\n",
"- Aggregate all input prompts, conversations, and responses in a single LabelStudio project. This consolidates all the data in one place for easier labeling and analysis.\n",
"- Refine prompts and responses to create a dataset for supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) scenarios. The labeled data can be used to further train the LLM to improve its performance.\n",
"- Evaluate model responses through human feedback. LabelStudio provides an interface for humans to review and provide feedback on model responses, allowing evaluation and iteration."
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Installation and setup"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"First install latest versions of Label Studio and Label Studio API client:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"!pip install -U label-studio label-studio-sdk openai"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"Next, run `label-studio` on the command line to start the local LabelStudio instance at `http://localhost:8080`. See the [Label Studio installation guide](https://labelstud.io/guide/install) for more options."
]
},
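{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Optional: launch Label Studio directly from the notebook instead of a terminal.\n",
"# Note this blocks the cell while the server is running, so a separate terminal is usually easier.\n",
"# !label-studio"
]
},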
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"You'll need a token to make API calls.\n",
"\n",
"Open your LabelStudio instance in your browser, go to `Account & Settings > Access Token` and copy the key.\n",
"\n",
"Set environment variables with your LabelStudio URL, API key and OpenAI API key:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ['LABEL_STUDIO_URL'] = '<YOUR-LABEL-STUDIO-URL>' # e.g. http://localhost:8080\n",
"os.environ['LABEL_STUDIO_API_KEY'] = '<YOUR-LABEL-STUDIO-API-KEY>'\n",
"os.environ['OPENAI_API_KEY'] = '<YOUR-OPENAI-API-KEY>'"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Collecting LLMs prompts and responses"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The data used for labeling is stored in projects within Label Studio. Every project is identified by an XML configuration that details the specifications for input and output data. \n",
"\n",
"Create a project that takes human input in text format and outputs an editable LLM response in a text area:\n",
"\n",
"```xml\n",
"<View>\n",
"<Style>\n",
" .prompt-box {\n",
" background-color: white;\n",
" border-radius: 10px;\n",
" box-shadow: 0px 4px 6px rgba(0, 0, 0, 0.1);\n",
" padding: 20px;\n",
" }\n",
"</Style>\n",
"<View className=\"root\">\n",
" <View className=\"prompt-box\">\n",
" <Text name=\"prompt\" value=\"$prompt\"/>\n",
" </View>\n",
" <TextArea name=\"response\" toName=\"prompt\"\n",
" maxSubmissions=\"1\" editable=\"true\"\n",
" required=\"true\"/>\n",
"</View>\n",
"<Header value=\"Rate the response:\"/>\n",
"<Rating name=\"rating\" toName=\"prompt\"/>\n",
"</View>\n",
"```\n",
"\n",
"1. To create a project in Label Studio, click on the \"Create\" button. \n",
"2. Enter a name for your project in the \"Project Name\" field, such as `My Project`.\n",
"3. Navigate to `Labeling Setup > Custom Template` and paste the XML configuration provided above."
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"You can collect input LLM prompts and output responses in a LabelStudio project, connecting it via `LabelStudioCallbackHandler`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.callbacks import LabelStudioCallbackHandler\n",
"\n",
"llm = OpenAI(\n",
" temperature=0,\n",
" callbacks=[\n",
" LabelStudioCallbackHandler(\n",
" project_name=\"My Project\"\n",
" )]\n",
")\n",
"print(llm(\"Tell me a joke\"))"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"In the Label Studio, open `My Project`. You will see the prompts, responses, and metadata like the model name. "
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Collecting Chat model Dialogues"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also track and display full chat dialogues in LabelStudio, with the ability to rate and modify the last response:\n",
"\n",
"1. Open Label Studio and click on the \"Create\" button.\n",
"2. Enter a name for your project in the \"Project Name\" field, such as `New Project with Chat`.\n",
"3. Navigate to Labeling Setup > Custom Template and paste the following XML configuration:\n",
"\n",
"```xml\n",
"<View>\n",
"<View className=\"root\">\n",
" <Paragraphs name=\"dialogue\"\n",
" value=\"$prompt\"\n",
" layout=\"dialogue\"\n",
" textKey=\"content\"\n",
" nameKey=\"role\"\n",
" granularity=\"sentence\"/>\n",
" <Header value=\"Final response:\"/>\n",
" <TextArea name=\"response\" toName=\"dialogue\"\n",
" maxSubmissions=\"1\" editable=\"true\"\n",
" required=\"true\"/>\n",
"</View>\n",
"<Header value=\"Rate the response:\"/>\n",
"<Rating name=\"rating\" toName=\"dialogue\"/>\n",
"</View>\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema import HumanMessage, SystemMessage\n",
"from langchain.callbacks import LabelStudioCallbackHandler\n",
"\n",
"chat_llm = ChatOpenAI(callbacks=[\n",
" LabelStudioCallbackHandler(\n",
" mode=\"chat\",\n",
" project_name=\"New Project with Chat\",\n",
" )\n",
"])\n",
"llm_results = chat_llm([\n",
" SystemMessage(content=\"Always use a lot of emojis\"),\n",
" HumanMessage(content=\"Tell me a joke\")\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In Label Studio, open \"New Project with Chat\". Click on a created task to view dialog history and edit/annotate responses."
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Custom Labeling Configuration"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"You can modify the default labeling configuration in LabelStudio to add more target labels like response sentiment, relevance, and many [other types annotator's feedback](https://labelstud.io/tags/).\n",
"\n",
"New labeling configuration can be added from UI: go to `Settings > Labeling Interface` and set up a custom configuration with additional tags like `Choices` for sentiment or `Rating` for relevance. Keep in mind that [`TextArea` tag](https://labelstud.io/tags/textarea) should be presented in any configuration to display the LLM responses.\n",
"\n",
"Alternatively, you can specify the labeling configuration on the initial call before project creation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"ls = LabelStudioCallbackHandler(project_config='''\n",
"<View>\n",
"<Text name=\"prompt\" value=\"$prompt\"/>\n",
"<TextArea name=\"response\" toName=\"prompt\"/>\n",
"<TextArea name=\"user_feedback\" toName=\"prompt\"/>\n",
"<Rating name=\"rating\" toName=\"prompt\"/>\n",
"<Choices name=\"sentiment\" toName=\"prompt\">\n",
" <Choice value=\"Positive\"/>\n",
" <Choice value=\"Negative\"/>\n",
"</Choices>\n",
"</View>\n",
"''')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that if the project doesn't exist, it will be created with the specified labeling configuration."
]
},
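{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch, the handler defined above can be attached to an LLM just like in the earlier example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0, callbacks=[ls])  # `ls` is the handler created above\n",
"print(llm(\"Tell me a joke\"))"
]
},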
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Other parameters"
]
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"The `LabelStudioCallbackHandler` accepts several optional parameters:\n",
"\n",
"- **api_key** - Label Studio API key. Overrides environmental variable `LABEL_STUDIO_API_KEY`.\n",
"- **url** - Label Studio URL. Overrides `LABEL_STUDIO_URL`, default `http://localhost:8080`.\n",
"- **project_id** - Existing Label Studio project ID. Overrides `LABEL_STUDIO_PROJECT_ID`. Stores data in this project.\n",
"- **project_name** - Project name if project ID not specified. Creates a new project. Default is `\"LangChain-%Y-%m-%d\"` formatted with the current date.\n",
"- **project_config** - [custom labeling configuration](#custom-labeling-configuration)\n",
"- **mode**: use this shortcut to create target configuration from scratch:\n",
" - `\"prompt\"` - Single prompt, single response. Default.\n",
" - `\"chat\"` - Multi-turn chat mode.\n",
"\n"
]
}
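,
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# A minimal sketch; the values below are placeholders, not real credentials.\n",
"handler = LabelStudioCallbackHandler(\n",
"    api_key='<YOUR-LABEL-STUDIO-API-KEY>',  # overrides LABEL_STUDIO_API_KEY\n",
"    url='http://localhost:8080',  # overrides LABEL_STUDIO_URL\n",
"    project_name='My Project',  # used only when project_id is not given\n",
"    mode='prompt',  # single prompt, single response (default)\n",
")"
]
}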
],
"metadata": {
"kernelspec": {
"display_name": "labelops",
"language": "python",
"name": "labelops"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 1
}

View File

@@ -71,3 +71,6 @@ or any other local ENV management tool.
Currently, `StreamlitCallbackHandler` is geared towards use with a LangChain Agent Executor. Support for additional agent types,
direct use with Chains, etc. will be added in the future.
You may also be interested in using
[StreamlitChatMessageHistory](/docs/integrations/memory/streamlit_chat_message_history) for LangChain.

View File

@@ -0,0 +1,225 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "642fd21c-600a-47a1-be96-6e1438b421a9",
"metadata": {},
"source": [
"# Anyscale\n",
"\n",
"This notebook demonstrates the use of `langchain.chat_models.ChatAnyscale` for [Anyscale Endpoints](https://endpoints.anyscale.com/).\n",
"\n",
"* Set `ANYSCALE_API_KEY` environment variable\n",
"* or use the `anyscale_api_key` keyword argument"
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"# !pip install openai"
],
"metadata": {
"collapsed": false
},
"id": "d00d850917865298"
},
{
"cell_type": "code",
"execution_count": 1,
"id": "72340871-ae2f-415f-b399-0777d32dc379",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"ANYSCALE_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "5d7fc704-3ea0-4c35-96e7-89fcae6c73fa",
"metadata": {},
"source": [
"# Let's try out each model offered on Anyscale Endpoints"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0dc9428d-4217-47d2-97de-f784b1764186",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"dict_keys(['meta-llama/Llama-2-70b-chat-hf', 'meta-llama/Llama-2-7b-chat-hf', 'meta-llama/Llama-2-13b-chat-hf'])\n"
]
}
],
"source": [
"from langchain.chat_models import ChatAnyscale\n",
"\n",
"chats = {\n",
" model: ChatAnyscale(model_name=model, temperature=1.0)\n",
" for model in ChatAnyscale.get_available_models()\n",
"}\n",
"\n",
"print(chats.keys())"
]
},
{
"cell_type": "markdown",
"id": "7c4f124a-eaf7-4d78-a2c0-b0aa23fb25c4",
"metadata": {},
"source": [
"# We can use async methods and other stuff supported by ChatOpenAI\n",
"\n",
"This way, the three requests will only take as long as the longest individual request."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1f94f5d2-569e-4a2c-965e-de53c2845fbb",
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"\n",
"from langchain.schema import SystemMessage, HumanMessage\n",
"\n",
"messages = [\n",
" SystemMessage(\n",
" content=\"You are a helpful AI that shares everything you know.\"\n",
" ),\n",
" HumanMessage(\n",
" content=\"Tell me technical facts about yourself. Are you a transformer model? How many billions of parameters do you have?\"\n",
" ),\n",
"]\n",
"\n",
"async def get_msgs():\n",
" tasks = [\n",
" chat.apredict_messages(messages)\n",
" for chat in chats.values()\n",
" ]\n",
" responses = await asyncio.gather(*tasks)\n",
" return dict(zip(chats.keys(), responses))"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "b2ced871-869a-4ca6-a2ec-6bfececdf7da",
"metadata": {},
"outputs": [],
"source": [
"import nest_asyncio\n",
"\n",
"nest_asyncio.apply()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "bc605fa5-9501-470d-a6c9-cd868d2145ef",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\tmeta-llama/Llama-2-70b-chat-hf\n",
"\n",
"Greetings! I'm just an AI, I don't have a personal identity like humans do, but I'm here to help you with any questions you have.\n",
"\n",
"I'm a large language model, which means I'm trained on a large corpus of text data to generate language outputs that are coherent and natural-sounding. My architecture is based on a transformer model, which is a type of neural network that's particularly well-suited for natural language processing tasks.\n",
"\n",
"As for my parameters, I have a few billion parameters, but I don't have access to the exact number as it's not relevant to my functioning. My training data includes a vast amount of text from various sources, including books, articles, and websites, which I use to learn patterns and relationships in language.\n",
"\n",
"I'm designed to be a helpful tool for a variety of tasks, such as answering questions, providing information, and generating text. I'm constantly learning and improving my abilities through machine learning algorithms and feedback from users like you.\n",
"\n",
"I hope this helps! Is there anything else you'd like to know about me or my capabilities?\n",
"\n",
"---\n",
"\n",
"\tmeta-llama/Llama-2-7b-chat-hf\n",
"\n",
"Ah, a fellow tech enthusiast! *adjusts glasses* I'm glad to share some technical details about myself. 🤓\n",
"Indeed, I'm a transformer model, specifically a BERT-like language model trained on a large corpus of text data. My architecture is based on the transformer framework, which is a type of neural network designed for natural language processing tasks. 🏠\n",
"As for the number of parameters, I have approximately 340 million. *winks* That's a pretty hefty number, if I do say so myself! These parameters allow me to learn and represent complex patterns in language, such as syntax, semantics, and more. 🤔\n",
"But don't ask me to do math in my head I'm a language model, not a calculating machine! 😅 My strengths lie in understanding and generating human-like text, so feel free to chat with me anytime you'd like. 💬\n",
"Now, do you have any more technical questions for me? Or would you like to engage in a nice chat? 😊\n",
"\n",
"---\n",
"\n",
"\tmeta-llama/Llama-2-13b-chat-hf\n",
"\n",
"Hello! As a friendly and helpful AI, I'd be happy to share some technical facts about myself.\n",
"\n",
"I am a transformer-based language model, specifically a variant of the BERT (Bidirectional Encoder Representations from Transformers) architecture. BERT was developed by Google in 2018 and has since become one of the most popular and widely-used AI language models.\n",
"\n",
"Here are some technical details about my capabilities:\n",
"\n",
"1. Parameters: I have approximately 340 million parameters, which are the numbers that I use to learn and represent language. This is a relatively large number of parameters compared to some other languages models, but it allows me to learn and understand complex language patterns and relationships.\n",
"2. Training: I was trained on a large corpus of text data, including books, articles, and other sources of written content. This training allows me to learn about the structure and conventions of language, as well as the relationships between words and phrases.\n",
"3. Architectures: My architecture is based on the transformer model, which is a type of neural network that is particularly well-suited for natural language processing tasks. The transformer model uses self-attention mechanisms to allow the model to \"attend\" to different parts of the input text, allowing it to capture long-range dependencies and contextual relationships.\n",
"4. Precision: I am capable of generating text with high precision and accuracy, meaning that I can produce text that is close to human-level quality in terms of grammar, syntax, and coherence.\n",
"5. Generative capabilities: In addition to being able to generate text based on prompts and questions, I am also capable of generating text based on a given topic or theme. This allows me to create longer, more coherent pieces of text that are organized around a specific idea or concept.\n",
"\n",
"Overall, I am a powerful and versatile language model that is capable of a wide range of natural language processing tasks. I am constantly learning and improving, and I am here to help answer any questions you may have!\n",
"\n",
"---\n",
"\n",
"CPU times: user 371 ms, sys: 15.5 ms, total: 387 ms\n",
"Wall time: 12 s\n"
]
}
],
"source": [
"%%time\n",
"\n",
"response_dict = asyncio.run(get_msgs())\n",
"\n",
"for model_name, response in response_dict.items():\n",
" print(f'\\t{model_name}')\n",
" print()\n",
" print(response.content)\n",
" print('\\n---\\n')"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -74,6 +74,124 @@
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "f27fa24d",
"metadata": {},
"source": [
"## Model Version\n",
"Azure OpenAI responses contain `model` property, which is name of the model used to generate the response. However unlike native OpenAI responses, it does not contain the version of the model, which is set on the deplyoment in Azure. This makes it tricky to know which version of the model was used to generate the response, which as result can lead to e.g. wrong total cost calculation with `OpenAICallbackHandler`.\n",
"\n",
"To solve this problem, you can pass `model_version` parameter to `AzureChatOpenAI` class, which will be added to the model name in the llm output. This way you can easily distinguish between different versions of the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0531798a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks import get_openai_callback"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "3fd97dfc",
"metadata": {},
"outputs": [],
"source": [
"BASE_URL = \"https://{endpoint}.openai.azure.com\"\n",
"API_KEY = \"...\"\n",
"DEPLOYMENT_NAME = \"gpt-35-turbo\" # in Azure, this deployment has version 0613 - input and output tokens are counted separately"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "aceddb72",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total Cost (USD): $0.000054\n"
]
}
],
"source": [
"model = AzureChatOpenAI(\n",
" openai_api_base=BASE_URL,\n",
" openai_api_version=\"2023-05-15\",\n",
" deployment_name=DEPLOYMENT_NAME,\n",
" openai_api_key=API_KEY,\n",
" openai_api_type=\"azure\",\n",
")\n",
"with get_openai_callback() as cb:\n",
" model(\n",
" [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
" ]\n",
" )\n",
" print(f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\") # without specifying the model version, flat-rate 0.002 USD per 1k input and output tokens is used\n"
]
},
{
"cell_type": "markdown",
"id": "2e61eefd",
"metadata": {},
"source": [
"We can provide the model version to `AzureChatOpenAI` constructor. It will get appended to the model name returned by Azure OpenAI and cost will be counted correctly."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "8d5e54e9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total Cost (USD): $0.000044\n"
]
}
],
"source": [
"model0613 = AzureChatOpenAI(\n",
" openai_api_base=BASE_URL,\n",
" openai_api_version=\"2023-05-15\",\n",
" deployment_name=DEPLOYMENT_NAME,\n",
" openai_api_key=API_KEY,\n",
" openai_api_type=\"azure\",\n",
" model_version=\"0613\"\n",
")\n",
"with get_openai_callback() as cb:\n",
" model0613(\n",
" [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
" ]\n",
" )\n",
" print(f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "99682534",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -92,7 +210,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.8.10"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,226 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte CDK"
]
},
{
"cell_type": "markdown",
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"A lot of source connectors are implemented using the [Airbyte CDK](https://docs.airbyte.com/connector-development/cdk-python/). This loader allows to run any of these connectors and return the data as documents."
]
},
{
"cell_type": "markdown",
"id": "3b06fbde",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "e3e9dc79",
"metadata": {},
"source": [
"First, you need to install the `airbyte-cdk` python package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d35e4e0",
"metadata": {},
"outputs": [],
"source": [
"#!pip install airbyte-cdk"
]
},
{
"cell_type": "markdown",
"id": "085aa658",
"metadata": {},
"source": [
"Then, either install an existing connector from the [Airbyte Github repository](https://github.com/airbytehq/airbyte/tree/master/airbyte-integrations/connectors) or create your own connector using the [Airbyte CDK](https://docs.airbyte.io/connector-development/connector-development).\n",
"\n",
"For example, to install the Github connector, run"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f6d04ef4",
"metadata": {},
"outputs": [],
"source": [
"#!pip install \"source_github@git+https://github.com/airbytehq/airbyte.git@master#subdirectory=airbyte-integrations/connectors/source-github\""
]
},
{
"cell_type": "markdown",
"id": "36069b74",
"metadata": {},
"source": [
"Some sources are also published as regular packages on PyPI"
]
},
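{
"cell_type": "code",
"execution_count": null,
"id": "b0a3d1c2",
"metadata": {},
"outputs": [],
"source": [
"# For example, the Hubspot source (used by the Airbyte Hubspot loader) is published on PyPI:\n",
"#!pip install airbyte-source-hubspot"
]
},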
{
"cell_type": "markdown",
"id": "ae855210",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "markdown",
"id": "02208f52",
"metadata": {},
"source": [
"Now you can create an `AirbyteCDKLoader` based on the imported source. It takes a `config` object that's passed to the connector. You also have to pick the stream you want to retrieve records from by name (`stream_name`). Check the connectors documentation page and spec definition for more information on the config object and available streams. For the Github connectors these are:\n",
"* [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-github/source_github/spec.json](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-github/source_github/spec.json).\n",
"* [https://docs.airbyte.com/integrations/sources/github/](https://docs.airbyte.com/integrations/sources/github/)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89a99e58",
"metadata": {},
"outputs": [],
"source": [
"\n",
"from langchain.document_loaders.airbyte import AirbyteCDKLoader\n",
"from source_github.source import SourceGithub # plug in your own source here\n",
"\n",
"config = {\n",
" # your github configuration\n",
" \"credentials\": {\n",
" \"api_url\": \"api.github.com\",\n",
" \"personal_access_token\": \"<token>\"\n",
" },\n",
" \"repository\": \"<repo>\",\n",
" \"start_date\": \"<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>\"\n",
"}\n",
"\n",
"issues_loader = AirbyteCDKLoader(source_class=SourceGithub, config=config, stream_name=\"issues\")"
]
},
{
"cell_type": "markdown",
"id": "2cea23fc",
"metadata": {},
"source": [
"Now you can load documents the usual way"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dae75cdb",
"metadata": {},
"outputs": [],
"source": [
"docs = issues_loader.load()"
]
},
{
"cell_type": "markdown",
"id": "4a93dc2a",
"metadata": {},
"source": [
"As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also you the `lazy_load` method which returns an iterator instead:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1782db09",
"metadata": {},
"outputs": [],
"source": [
"docs_iterator = issues_loader.lazy_load()"
]
},
{
"cell_type": "markdown",
"id": "3a124086",
"metadata": {},
"source": [
"Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different, pass in a record_handler function when creating the loader:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5671395d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"\n",
"def handle_record(record, id):\n",
" return Document(page_content=record.data[\"title\"] + \"\\n\" + (record.data[\"body\"] or \"\"), metadata=record.data)\n",
"\n",
"issues_loader = AirbyteCDKLoader(source_class=SourceGithub, config=config, stream_name=\"issues\", record_handler=handle_record)\n",
"\n",
"docs = issues_loader.load()"
]
},
{
"cell_type": "markdown",
"id": "223eb8bc",
"metadata": {},
"source": [
"## Incremental loads\n",
"\n",
"Some streams allow incremental loading, this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.\n",
"\n",
"To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7061e735",
"metadata": {},
"outputs": [],
"source": [
"last_state = issues_loader.last_state # store safely\n",
"\n",
"incremental_issue_loader = AirbyteCDKLoader(source_class=SourceGithub, config=config, stream_name=\"issues\", state=last_state)\n",
"\n",
"new_docs = incremental_issue_loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,206 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Gong"
]
},
{
"cell_type": "markdown",
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Gong connector as a document loader, allowing you to load various Gong objects as documents."
]
},
{
"cell_type": "markdown",
"id": "6847a40c",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"id": "3b06fbde",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "e3e9dc79",
"metadata": {},
"source": [
"First, you need to install the `airbyte-source-gong` python package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d35e4e0",
"metadata": {},
"outputs": [],
"source": [
"#!pip install airbyte-source-gong"
]
},
{
"cell_type": "markdown",
"id": "ae855210",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "markdown",
"id": "02208f52",
"metadata": {},
"source": [
"Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/gong/) for details about how to configure the reader.\n",
"The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-gong/source_gong/spec.yaml](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-gong/source_gong/spec.yaml).\n",
"\n",
"The general shape looks like this:\n",
"```python\n",
"{\n",
" \"access_key\": \"<access key name>\",\n",
" \"access_key_secret\": \"<access key secret>\",\n",
" \"start_date\": \"<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>\",\n",
"}\n",
"```\n",
"\n",
"By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89a99e58",
"metadata": {},
"outputs": [],
"source": [
"\n",
"from langchain.document_loaders.airbyte import AirbyteGongLoader\n",
"\n",
"config = {\n",
" # your gong configuration\n",
"}\n",
"\n",
"loader = AirbyteGongLoader(config=config, stream_name=\"calls\") # check the documentation linked above for a list of all streams"
]
},
{
"cell_type": "markdown",
"id": "2cea23fc",
"metadata": {},
"source": [
"Now you can load documents the usual way"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dae75cdb",
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "4a93dc2a",
"metadata": {},
"source": [
"As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also you the `lazy_load` method which returns an iterator instead:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1782db09",
"metadata": {},
"outputs": [],
"source": [
"docs_iterator = loader.lazy_load()"
]
},
{
"cell_type": "markdown",
"id": "3a124086",
"metadata": {},
"source": [
"Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To process documents, create a class inheriting from the base loader and implement the `_handle_records` method yourself:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5671395d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"\n",
"def handle_record(record, id):\n",
" return Document(page_content=record.data[\"title\"], metadata=record.data)\n",
"\n",
"loader = AirbyteGongLoader(config=config, record_handler=handle_record, stream_name=\"calls\")\n",
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "223eb8bc",
"metadata": {},
"source": [
"## Incremental loads\n",
"\n",
"Some streams allow incremental loading, this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.\n",
"\n",
"To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7061e735",
"metadata": {},
"outputs": [],
"source": [
"last_state = loader.last_state # store safely\n",
"\n",
"incremental_loader = AirbyteGongLoader(config=config, stream_name=\"calls\", state=last_state)\n",
"\n",
"new_docs = incremental_loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,208 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Hubspot"
]
},
{
"cell_type": "markdown",
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Hubspot connector as a document loader, allowing you to load various Hubspot objects as documents."
]
},
{
"cell_type": "markdown",
"id": "6847a40c",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"id": "3b06fbde",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "e3e9dc79",
"metadata": {},
"source": [
"First, you need to install the `airbyte-source-hubspot` python package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d35e4e0",
"metadata": {},
"outputs": [],
"source": [
"#!pip install airbyte-source-hubspot"
]
},
{
"cell_type": "markdown",
"id": "ae855210",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "markdown",
"id": "02208f52",
"metadata": {},
"source": [
"Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/hubspot/) for details about how to configure the reader.\n",
"The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-hubspot/source_hubspot/spec.yaml](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-hubspot/source_hubspot/spec.yaml).\n",
"\n",
"The general shape looks like this:\n",
"```python\n",
"{\n",
" \"start_date\": \"<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>\",\n",
" \"credentials\": {\n",
" \"credentials_title\": \"Private App Credentials\",\n",
" \"access_token\": \"<access token of your private app>\"\n",
" }\n",
"}\n",
"```\n",
"\n",
"By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89a99e58",
"metadata": {},
"outputs": [],
"source": [
"\n",
"from langchain.document_loaders.airbyte import AirbyteHubspotLoader\n",
"\n",
"config = {\n",
" # your hubspot configuration\n",
"}\n",
"\n",
"loader = AirbyteHubspotLoader(config=config, stream_name=\"products\") # check the documentation linked above for a list of all streams"
]
},
{
"cell_type": "markdown",
"id": "2cea23fc",
"metadata": {},
"source": [
"Now you can load documents the usual way"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dae75cdb",
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "4a93dc2a",
"metadata": {},
"source": [
"As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also you the `lazy_load` method which returns an iterator instead:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1782db09",
"metadata": {},
"outputs": [],
"source": [
"docs_iterator = loader.lazy_load()"
]
},
{
"cell_type": "markdown",
"id": "3a124086",
"metadata": {},
"source": [
"Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To process documents, create a class inheriting from the base loader and implement the `_handle_records` method yourself:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5671395d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"\n",
"def handle_record(record, id):\n",
" return Document(page_content=record.data[\"title\"], metadata=record.data)\n",
"\n",
"loader = AirbyteHubspotLoader(config=config, record_handler=handle_record, stream_name=\"products\")\n",
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "223eb8bc",
"metadata": {},
"source": [
"## Incremental loads\n",
"\n",
"Some streams allow incremental loading, this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.\n",
"\n",
"To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7061e735",
"metadata": {},
"outputs": [],
"source": [
"last_state = loader.last_state # store safely\n",
"\n",
"incremental_loader = AirbyteHubspotLoader(config=config, stream_name=\"products\", state=last_state)\n",
"\n",
"new_docs = incremental_loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,213 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Salesforce"
]
},
{
"cell_type": "markdown",
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Salesforce connector as a document loader, allowing you to load various Salesforce objects as documents."
]
},
{
"cell_type": "markdown",
"id": "6847a40c",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"id": "3b06fbde",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "e3e9dc79",
"metadata": {},
"source": [
"First, you need to install the `airbyte-source-salesforce` python package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d35e4e0",
"metadata": {},
"outputs": [],
"source": [
"#!pip install airbyte-source-salesforce"
]
},
{
"cell_type": "markdown",
"id": "ae855210",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "markdown",
"id": "02208f52",
"metadata": {},
"source": [
"Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/salesforce/) for details about how to configure the reader.\n",
"The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-salesforce/source_salesforce/spec.yaml](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-salesforce/source_salesforce/spec.yaml).\n",
"\n",
"The general shape looks like this:\n",
"```python\n",
"{\n",
" \"client_id\": \"<oauth client id>\",\n",
" \"client_secret\": \"<oauth client secret>\",\n",
" \"refresh_token\": \"<oauth refresh token>\",\n",
" \"start_date\": \"<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>\",\n",
" \"is_sandbox\": False, # set to True if you're using a sandbox environment\n",
" \"streams_criteria\": [ # Array of filters for salesforce objects that should be loadable\n",
" {\"criteria\": \"exacts\", \"value\": \"Account\"}, # Exact name of salesforce object\n",
" {\"criteria\": \"starts with\", \"value\": \"Asset\"}, # Prefix of the name\n",
" # Other allowed criteria: ends with, contains, starts not with, ends not with, not contains, not exacts\n",
" ],\n",
"}\n",
"```\n",
"\n",
"By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89a99e58",
"metadata": {},
"outputs": [],
"source": [
"\n",
"from langchain.document_loaders.airbyte import AirbyteSalesforceLoader\n",
"\n",
"config = {\n",
" # your salesforce configuration\n",
"}\n",
"\n",
"loader = AirbyteSalesforceLoader(config=config, stream_name=\"asset\") # check the documentation linked above for a list of all streams"
]
},
{
"cell_type": "markdown",
"id": "2cea23fc",
"metadata": {},
"source": [
"Now you can load documents the usual way"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dae75cdb",
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "4a93dc2a",
"metadata": {},
"source": [
"As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also you the `lazy_load` method which returns an iterator instead:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1782db09",
"metadata": {},
"outputs": [],
"source": [
"docs_iterator = loader.lazy_load()"
]
},
{
"cell_type": "markdown",
"id": "3a124086",
"metadata": {},
"source": [
"Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different, pass in a record_handler function when creating the loader:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5671395d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"\n",
"def handle_record(record, id):\n",
" return Document(page_content=record.data[\"title\"], metadata=record.data)\n",
"\n",
"loader = AirbyteSalesforceLoader(config=config, record_handler=handle_record, stream_name=\"asset\")\n",
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "223eb8bc",
"metadata": {},
"source": [
"## Incremental loads\n",
"\n",
"Some streams allow incremental loading, this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.\n",
"\n",
"To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7061e735",
"metadata": {},
"outputs": [],
"source": [
"last_state = loader.last_state # store safely\n",
"\n",
"incremental_loader = AirbyteSalesforceLoader(config=config, stream_name=\"asset\", state=last_state)\n",
"\n",
"new_docs = incremental_loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,209 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Shopify"
]
},
{
"cell_type": "markdown",
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Shopify connector as a document loader, allowing you to load various Shopify objects as documents."
]
},
{
"cell_type": "markdown",
"id": "6847a40c",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"id": "3b06fbde",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "e3e9dc79",
"metadata": {},
"source": [
"First, you need to install the `airbyte-source-shopify` python package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d35e4e0",
"metadata": {},
"outputs": [],
"source": [
"#!pip install airbyte-source-shopify"
]
},
{
"cell_type": "markdown",
"id": "ae855210",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "markdown",
"id": "02208f52",
"metadata": {},
"source": [
"Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/shopify/) for details about how to configure the reader.\n",
"The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-shopify/source_shopify/spec.json](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-shopify/source_shopify/spec.json).\n",
"\n",
"The general shape looks like this:\n",
"```python\n",
"{\n",
" \"start_date\": \"<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>\",\n",
" \"shop\": \"<name of the shop you want to retrieve documents from>\",\n",
" \"credentials\": {\n",
" \"auth_method\": \"api_password\",\n",
" \"api_password\": \"<your api password>\"\n",
" }\n",
"}\n",
"```\n",
"\n",
"By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89a99e58",
"metadata": {},
"outputs": [],
"source": [
"\n",
"from langchain.document_loaders.airbyte import AirbyteShopifyLoader\n",
"\n",
"config = {\n",
" # your shopify configuration\n",
"}\n",
"\n",
"loader = AirbyteShopifyLoader(config=config, stream_name=\"orders\") # check the documentation linked above for a list of all streams"
]
},
{
"cell_type": "markdown",
"id": "2cea23fc",
"metadata": {},
"source": [
"Now you can load documents the usual way"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dae75cdb",
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "4a93dc2a",
"metadata": {},
"source": [
"As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also you the `lazy_load` method which returns an iterator instead:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1782db09",
"metadata": {},
"outputs": [],
"source": [
"docs_iterator = loader.lazy_load()"
]
},
{
"cell_type": "markdown",
"id": "3a124086",
"metadata": {},
"source": [
"Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different, pass in a record_handler function when creating the loader:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5671395d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"\n",
"def handle_record(record, id):\n",
" return Document(page_content=record.data[\"title\"], metadata=record.data)\n",
"\n",
"loader = AirbyteShopifyLoader(config=config, record_handler=handle_record, stream_name=\"orders\")\n",
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "223eb8bc",
"metadata": {},
"source": [
"## Incremental loads\n",
"\n",
"Some streams allow incremental loading, this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.\n",
"\n",
"To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7061e735",
"metadata": {},
"outputs": [],
"source": [
"last_state = loader.last_state # store safely\n",
"\n",
"incremental_loader = AirbyteShopifyLoader(config=config, stream_name=\"orders\", state=last_state)\n",
"\n",
"new_docs = incremental_loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,206 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Stripe"
]
},
{
"cell_type": "markdown",
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Stripe connector as a document loader, allowing you to load various Stripe objects as documents."
]
},
{
"cell_type": "markdown",
"id": "6847a40c",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"id": "3b06fbde",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "e3e9dc79",
"metadata": {},
"source": [
"First, you need to install the `airbyte-source-stripe` python package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d35e4e0",
"metadata": {},
"outputs": [],
"source": [
"#!pip install airbyte-source-stripe"
]
},
{
"cell_type": "markdown",
"id": "ae855210",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "markdown",
"id": "02208f52",
"metadata": {},
"source": [
"Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/stripe/) for details about how to configure the reader.\n",
"The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-stripe/source_stripe/spec.yaml](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-stripe/source_stripe/spec.yaml).\n",
"\n",
"The general shape looks like this:\n",
"```python\n",
"{\n",
" \"client_secret\": \"<secret key>\",\n",
" \"account_id\": \"<account id>\",\n",
" \"start_date\": \"<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>\",\n",
"}\n",
"```\n",
"\n",
"By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89a99e58",
"metadata": {},
"outputs": [],
"source": [
"\n",
"from langchain.document_loaders.airbyte import AirbyteStripeLoader\n",
"\n",
"config = {\n",
" # your stripe configuration\n",
"}\n",
"\n",
"loader = AirbyteStripeLoader(config=config, stream_name=\"invoices\") # check the documentation linked above for a list of all streams"
]
},
{
"cell_type": "markdown",
"id": "2cea23fc",
"metadata": {},
"source": [
"Now you can load documents the usual way"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dae75cdb",
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "4a93dc2a",
"metadata": {},
"source": [
"As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also you the `lazy_load` method which returns an iterator instead:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1782db09",
"metadata": {},
"outputs": [],
"source": [
"docs_iterator = loader.lazy_load()"
]
},
{
"cell_type": "markdown",
"id": "3a124086",
"metadata": {},
"source": [
"Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different, pass in a record_handler function when creating the loader:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5671395d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"\n",
"def handle_record(record, id):\n",
" return Document(page_content=record.data[\"title\"], metadata=record.data)\n",
"\n",
"loader = AirbyteStripeLoader(config=config, record_handler=handle_record, stream_name=\"invoices\")\n",
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "223eb8bc",
"metadata": {},
"source": [
"## Incremental loads\n",
"\n",
"Some streams allow incremental loading, this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.\n",
"\n",
"To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7061e735",
"metadata": {},
"outputs": [],
"source": [
"last_state = loader.last_state # store safely\n",
"\n",
"incremental_loader = AirbyteStripeLoader(config=config, record_handler=handle_record, stream_name=\"invoices\", state=last_state)\n",
"\n",
"new_docs = incremental_loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,209 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Typeform"
]
},
{
"cell_type": "markdown",
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Typeform connector as a document loader, allowing you to load various Typeform objects as documents."
]
},
{
"cell_type": "markdown",
"id": "3b06fbde",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "e3e9dc79",
"metadata": {},
"source": [
"First, you need to install the `airbyte-source-typeform` python package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d35e4e0",
"metadata": {},
"outputs": [],
"source": [
"#!pip install airbyte-source-typeform"
]
},
{
"cell_type": "markdown",
"id": "ae855210",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "markdown",
"id": "02208f52",
"metadata": {},
"source": [
"Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/typeform/) for details about how to configure the reader.\n",
"The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-typeform/source_typeform/spec.json](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-typeform/source_typeform/spec.json).\n",
"\n",
"The general shape looks like this:\n",
"```python\n",
"{\n",
" \"credentials\": {\n",
" \"auth_type\": \"Private Token\",\n",
" \"access_token\": \"<your auth token>\"\n",
" },\n",
" \"start_date\": \"<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>\",\n",
" \"form_ids\": [\"<id of form to load records for>\"] # if omitted, records from all forms will be loaded\n",
"}\n",
"```\n",
"\n",
"By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89a99e58",
"metadata": {},
"outputs": [],
"source": [
"\n",
"from langchain.document_loaders.airbyte import AirbyteTypeformLoader\n",
"\n",
"config = {\n",
" # your typeform configuration\n",
"}\n",
"\n",
"loader = AirbyteTypeformLoader(config=config, stream_name=\"forms\") # check the documentation linked above for a list of all streams"
]
},
{
"cell_type": "markdown",
"id": "2cea23fc",
"metadata": {},
"source": [
"Now you can load documents the usual way"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dae75cdb",
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "4a93dc2a",
"metadata": {},
"source": [
"As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also you the `lazy_load` method which returns an iterator instead:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1782db09",
"metadata": {},
"outputs": [],
"source": [
"docs_iterator = loader.lazy_load()"
]
},
{
"cell_type": "markdown",
"id": "3a124086",
"metadata": {},
"source": [
"Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different, pass in a record_handler function when creating the loader:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5671395d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"\n",
"def handle_record(record, id):\n",
" return Document(page_content=record.data[\"title\"], metadata=record.data)\n",
"\n",
"loader = AirbyteTypeformLoader(config=config, record_handler=handle_record, stream_name=\"forms\")\n",
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "223eb8bc",
"metadata": {},
"source": [
"## Incremental loads\n",
"\n",
"Some streams allow incremental loading, this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.\n",
"\n",
"To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7061e735",
"metadata": {},
"outputs": [],
"source": [
"last_state = loader.last_state # store safely\n",
"\n",
"incremental_loader = AirbyteTypeformLoader(config=config, record_handler=handle_record, stream_name=\"forms\", state=last_state)\n",
"\n",
"new_docs = incremental_loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,210 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1f3a5ebf",
"metadata": {},
"source": [
"# Airbyte Zendesk Support"
]
},
{
"cell_type": "markdown",
"id": "35ac77b1-449b-44f7-b8f3-3494d55c286e",
"metadata": {},
"source": [
">[Airbyte](https://github.com/airbytehq/airbyte) is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.\n",
"\n",
"This loader exposes the Zendesk Support connector as a document loader, allowing you to load various objects as documents."
]
},
{
"cell_type": "markdown",
"id": "3b06fbde",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "e3e9dc79",
"metadata": {},
"source": [
"First, you need to install the `airbyte-source-zendesk-support` python package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d35e4e0",
"metadata": {},
"outputs": [],
"source": [
"#!pip install airbyte-source-zendesk-support"
]
},
{
"cell_type": "markdown",
"id": "ae855210",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "markdown",
"id": "02208f52",
"metadata": {},
"source": [
"Check out the [Airbyte documentation page](https://docs.airbyte.com/integrations/sources/zendesk-support/) for details about how to configure the reader.\n",
"The JSON schema the config object should adhere to can be found on Github: [https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-zendesk-support/source_zendesk_support/spec.json](https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-zendesk-support/source_zendesk_support/spec.json).\n",
"\n",
"The general shape looks like this:\n",
"```python\n",
"{\n",
" \"subdomain\": \"<your zendesk subdomain>\",\n",
" \"start_date\": \"<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>\",\n",
" \"credentials\": {\n",
" \"credentials\": \"api_token\",\n",
" \"email\": \"<your email>\",\n",
" \"api_token\": \"<your api token>\"\n",
" }\n",
"}\n",
"```\n",
"\n",
"By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89a99e58",
"metadata": {},
"outputs": [],
"source": [
"\n",
"from langchain.document_loaders.airbyte import AirbyteZendeskSupportLoader\n",
"\n",
"config = {\n",
" # your zendesk-support configuration\n",
"}\n",
"\n",
"loader = AirbyteZendeskSupportLoader(config=config, stream_name=\"tickets\") # check the documentation linked above for a list of all streams"
]
},
{
"cell_type": "markdown",
"id": "2cea23fc",
"metadata": {},
"source": [
"Now you can load documents the usual way"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dae75cdb",
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "4a93dc2a",
"metadata": {},
"source": [
"As `load` returns a list, it will block until all documents are loaded. To have better control over this process, you can also you the `lazy_load` method which returns an iterator instead:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1782db09",
"metadata": {},
"outputs": [],
"source": [
"docs_iterator = loader.lazy_load()"
]
},
{
"cell_type": "markdown",
"id": "3a124086",
"metadata": {},
"source": [
"Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different, pass in a record_handler function when creating the loader:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5671395d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.docstore.document import Document\n",
"\n",
"def handle_record(record, id):\n",
" return Document(page_content=record.data[\"title\"], metadata=record.data)\n",
"\n",
"loader = AirbyteZendeskSupportLoader(config=config, record_handler=handle_record, stream_name=\"tickets\")\n",
"docs = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "223eb8bc",
"metadata": {},
"source": [
"## Incremental loads\n",
"\n",
"Some streams allow incremental loading, this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.\n",
"\n",
"To take advantage of this, store the `last_state` property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7061e735",
"metadata": {},
"outputs": [],
"source": [
"last_state = loader.last_state # store safely\n",
"\n",
"incremental_loader = AirbyteZendeskSupportLoader(config=config, stream_name=\"tickets\", state=last_state)\n",
"\n",
"new_docs = incremental_loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,325 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "62359e08-cf80-4210-a30c-f450000e65b9",
"metadata": {},
"source": [
"# ArcGISLoader\n",
"\n",
"This notebook demonstrates the use of the `langchain.document_loaders.ArcGISLoader` class.\n",
"\n",
"You will need to install the ArcGIS API for Python `arcgis` and, optionally, `bs4.BeautifulSoup`.\n",
"\n",
"You can use an `arcgis.gis.GIS` object for authenticated data loading, or leave it blank to access public data."
]
},
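{
"cell_type": "markdown",
"id": "9c1d2e3f",
"metadata": {},
"source": [
"For authenticated access, a minimal sketch looks like the following; the portal URL, credentials, and layer URL are placeholders, and it assumes your version of the loader accepts a `gis` keyword:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0a1b2c3d",
"metadata": {},
"outputs": [],
"source": [
"from arcgis.gis import GIS\n",
"\n",
"from langchain.document_loaders import ArcGISLoader\n",
"\n",
"# placeholder portal URL and credentials -- replace with your own\n",
"gis = GIS(\"https://www.arcgis.com\", \"<username>\", \"<password>\")\n",
"\n",
"secured_url = \"https://<your-org>.maps.arcgis.com/.../FeatureServer/0\"\n",
"auth_loader = ArcGISLoader(secured_url, gis=gis)  # assumes the loader accepts a `gis` keyword"
]
},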
{
"cell_type": "code",
"execution_count": 1,
"id": "b782cab5-0584-4e2a-9073-009fb8dc93a3",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import ArcGISLoader\n",
"\n",
"\n",
"url = \"https://maps1.vcgov.org/arcgis/rest/services/Beaches/MapServer/7\"\n",
"\n",
"loader = ArcGISLoader(url)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "aa3053cf-4127-43ea-bf56-e378b348091f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 4.04 ms, sys: 1.63 ms, total: 5.67 ms\n",
"Wall time: 644 ms\n"
]
}
],
"source": [
"%%time\n",
"\n",
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a2444519-9117-4feb-8bb9-8931ce286fa5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"dict_keys(['url', 'layer_description', 'item_description', 'layer_properties'])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].metadata.keys()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "6b6e9107-6a80-4ef7-8149-3013faa2de76",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"KeysView({\n",
" \"currentVersion\": 10.81,\n",
" \"id\": 7,\n",
" \"name\": \"Beach Ramps\",\n",
" \"type\": \"Feature Layer\",\n",
" \"description\": \"\",\n",
" \"geometryType\": \"esriGeometryPoint\",\n",
" \"sourceSpatialReference\": {\n",
" \"wkid\": 2881,\n",
" \"latestWkid\": 2881\n",
" },\n",
" \"copyrightText\": \"\",\n",
" \"parentLayer\": null,\n",
" \"subLayers\": [],\n",
" \"minScale\": 750000,\n",
" \"maxScale\": 0,\n",
" \"drawingInfo\": {\n",
" \"renderer\": {\n",
" \"type\": \"simple\",\n",
" \"symbol\": {\n",
" \"type\": \"esriPMS\",\n",
" \"url\": \"9bb2e5ca499bb68aa3ee0d4e1ecc3849\",\n",
" \"imageData\": \"iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IB2cksfwAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAJJJREFUOI3NkDEKg0AQRZ9kkSnSGBshR7DJqdJYeg7BMpcS0uQWQsqoCLExkcUJzGqT38zw2fcY1rEzbp7vjXz0EXC7gBxs1ABcG/8CYkCcDqwyLqsV+RlV0I/w7PzuJBArr1VB20H58Ls6h+xoFITkTwWpQJX7XSIBAnFwVj7MLAjJV/AC6G3QoAmK+74Lom04THTBEp/HCSc6AAAAAElFTkSuQmCC\",\n",
" \"contentType\": \"image/png\",\n",
" \"width\": 12,\n",
" \"height\": 12,\n",
" \"angle\": 0,\n",
" \"xoffset\": 0,\n",
" \"yoffset\": 0\n",
" },\n",
" \"label\": \"\",\n",
" \"description\": \"\"\n",
" },\n",
" \"transparency\": 0,\n",
" \"labelingInfo\": null\n",
" },\n",
" \"defaultVisibility\": true,\n",
" \"extent\": {\n",
" \"xmin\": -81.09480168806815,\n",
" \"ymin\": 28.858349245353473,\n",
" \"xmax\": -80.77512908572814,\n",
" \"ymax\": 29.41078388840041,\n",
" \"spatialReference\": {\n",
" \"wkid\": 4326,\n",
" \"latestWkid\": 4326\n",
" }\n",
" },\n",
" \"hasAttachments\": false,\n",
" \"htmlPopupType\": \"esriServerHTMLPopupTypeNone\",\n",
" \"displayField\": \"AccessName\",\n",
" \"typeIdField\": null,\n",
" \"subtypeFieldName\": null,\n",
" \"subtypeField\": null,\n",
" \"defaultSubtypeCode\": null,\n",
" \"fields\": [\n",
" {\n",
" \"name\": \"OBJECTID\",\n",
" \"type\": \"esriFieldTypeOID\",\n",
" \"alias\": \"OBJECTID\",\n",
" \"domain\": null\n",
" },\n",
" {\n",
" \"name\": \"Shape\",\n",
" \"type\": \"esriFieldTypeGeometry\",\n",
" \"alias\": \"Shape\",\n",
" \"domain\": null\n",
" },\n",
" {\n",
" \"name\": \"AccessName\",\n",
" \"type\": \"esriFieldTypeString\",\n",
" \"alias\": \"AccessName\",\n",
" \"length\": 40,\n",
" \"domain\": null\n",
" },\n",
" {\n",
" \"name\": \"AccessID\",\n",
" \"type\": \"esriFieldTypeString\",\n",
" \"alias\": \"AccessID\",\n",
" \"length\": 50,\n",
" \"domain\": null\n",
" },\n",
" {\n",
" \"name\": \"AccessType\",\n",
" \"type\": \"esriFieldTypeString\",\n",
" \"alias\": \"AccessType\",\n",
" \"length\": 25,\n",
" \"domain\": null\n",
" },\n",
" {\n",
" \"name\": \"GeneralLoc\",\n",
" \"type\": \"esriFieldTypeString\",\n",
" \"alias\": \"GeneralLoc\",\n",
" \"length\": 100,\n",
" \"domain\": null\n",
" },\n",
" {\n",
" \"name\": \"MilePost\",\n",
" \"type\": \"esriFieldTypeDouble\",\n",
" \"alias\": \"MilePost\",\n",
" \"domain\": null\n",
" },\n",
" {\n",
" \"name\": \"City\",\n",
" \"type\": \"esriFieldTypeString\",\n",
" \"alias\": \"City\",\n",
" \"length\": 50,\n",
" \"domain\": null\n",
" },\n",
" {\n",
" \"name\": \"AccessStatus\",\n",
" \"type\": \"esriFieldTypeString\",\n",
" \"alias\": \"AccessStatus\",\n",
" \"length\": 50,\n",
" \"domain\": null\n",
" },\n",
" {\n",
" \"name\": \"Entry_Date_Time\",\n",
" \"type\": \"esriFieldTypeDate\",\n",
" \"alias\": \"Entry_Date_Time\",\n",
" \"length\": 8,\n",
" \"domain\": null\n",
" },\n",
" {\n",
" \"name\": \"DrivingZone\",\n",
" \"type\": \"esriFieldTypeString\",\n",
" \"alias\": \"DrivingZone\",\n",
" \"length\": 50,\n",
" \"domain\": null\n",
" }\n",
" ],\n",
" \"geometryField\": {\n",
" \"name\": \"Shape\",\n",
" \"type\": \"esriFieldTypeGeometry\",\n",
" \"alias\": \"Shape\"\n",
" },\n",
" \"indexes\": null,\n",
" \"subtypes\": [],\n",
" \"relationships\": [],\n",
" \"canModifyLayer\": true,\n",
" \"canScaleSymbols\": false,\n",
" \"hasLabels\": false,\n",
" \"capabilities\": \"Map,Query,Data\",\n",
" \"maxRecordCount\": 1000,\n",
" \"supportsStatistics\": true,\n",
" \"supportsAdvancedQueries\": true,\n",
" \"supportedQueryFormats\": \"JSON, geoJSON\",\n",
" \"isDataVersioned\": false,\n",
" \"ownershipBasedAccessControlForFeatures\": {\n",
" \"allowOthersToQuery\": true\n",
" },\n",
" \"useStandardizedQueries\": true,\n",
" \"advancedQueryCapabilities\": {\n",
" \"useStandardizedQueries\": true,\n",
" \"supportsStatistics\": true,\n",
" \"supportsHavingClause\": true,\n",
" \"supportsCountDistinct\": true,\n",
" \"supportsOrderBy\": true,\n",
" \"supportsDistinct\": true,\n",
" \"supportsPagination\": true,\n",
" \"supportsTrueCurve\": true,\n",
" \"supportsReturningQueryExtent\": true,\n",
" \"supportsQueryWithDistance\": true,\n",
" \"supportsSqlExpression\": true\n",
" },\n",
" \"supportsDatumTransformation\": true,\n",
" \"dateFieldsTimeReference\": null,\n",
" \"supportsCoordinatesQuantization\": true\n",
"})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].metadata['layer_properties'].keys()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "1d132b7d-5a13-4d66-98e8-785ffdf87af0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\"OBJECTID\": 2, \"AccessName\": \"27TH AV\", \"AccessID\": \"NS-141\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"3600 BLK S ATLANTIC AV\", \"MilePost\": 4.83, \"City\": \"NEW SMYRNA BEACH\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691397348000, \"DrivingZone\": \"BOTH\"}\n",
"{\"OBJECTID\": 7, \"AccessName\": \"BEACHWAY AV\", \"AccessID\": \"NS-106\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"1400 N ATLANTIC AV\", \"MilePost\": 1.57, \"City\": \"NEW SMYRNA BEACH\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691397348000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 10, \"AccessName\": \"SEABREEZE BLVD\", \"AccessID\": \"DB-051\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"500 BLK N ATLANTIC AV\", \"MilePost\": 14.24, \"City\": \"DAYTONA BEACH\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691394892000, \"DrivingZone\": \"BOTH\"}\n",
"{\"OBJECTID\": 13, \"AccessName\": \"GRANADA BLVD\", \"AccessID\": \"OB-030\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"20 BLK OCEAN SHORE BLVD\", \"MilePost\": 10.02, \"City\": \"ORMOND BEACH\", \"AccessStatus\": \"4X4 ONLY\", \"Entry_Date_Time\": 1691394952000, \"DrivingZone\": \"BOTH\"}\n",
"{\"OBJECTID\": 16, \"AccessName\": \"INTERNATIONAL SPEEDWAY BLVD\", \"AccessID\": \"DB-059\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"300 BLK S ATLANTIC AV\", \"MilePost\": 15.27, \"City\": \"DAYTONA BEACH\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691395174000, \"DrivingZone\": \"BOTH\"}\n",
"{\"OBJECTID\": 26, \"AccessName\": \"UNIVERSITY BLVD\", \"AccessID\": \"DB-048\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"900 BLK N ATLANTIC AV\", \"MilePost\": 13.74, \"City\": \"DAYTONA BEACH\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691394892000, \"DrivingZone\": \"BOTH\"}\n",
"{\"OBJECTID\": 36, \"AccessName\": \"BEACH ST\", \"AccessID\": \"PI-097\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"4890 BLK S ATLANTIC AV\", \"MilePost\": 25.85, \"City\": \"PONCE INLET\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691397348000, \"DrivingZone\": \"BOTH\"}\n",
"{\"OBJECTID\": 40, \"AccessName\": \"BOTEFUHR AV\", \"AccessID\": \"DBS-067\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"1900 BLK S ATLANTIC AV\", \"MilePost\": 16.68, \"City\": \"DAYTONA BEACH SHORES\", \"AccessStatus\": \"4X4 ONLY\", \"Entry_Date_Time\": 1691395124000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 41, \"AccessName\": \"SILVER BEACH AV\", \"AccessID\": \"DB-064\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"1000 BLK S ATLANTIC AV\", \"MilePost\": 15.98, \"City\": \"DAYTONA BEACH\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691395174000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 50, \"AccessName\": \"3RD AV\", \"AccessID\": \"NS-118\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"1200 BLK HILL ST\", \"MilePost\": 3.25, \"City\": \"NEW SMYRNA BEACH\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691397348000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 58, \"AccessName\": \"DUNLAWTON BLVD\", \"AccessID\": \"DBS-078\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"3400 BLK S ATLANTIC AV\", \"MilePost\": 20.61, \"City\": \"DAYTONA BEACH SHORES\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691397348000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 63, \"AccessName\": \"MILSAP RD\", \"AccessID\": \"OB-037\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"700 BLK S ATLANTIC AV\", \"MilePost\": 11.52, \"City\": \"ORMOND BEACH\", \"AccessStatus\": \"4X4 ONLY\", \"Entry_Date_Time\": 1691394952000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 68, \"AccessName\": \"EMILIA AV\", \"AccessID\": \"DBS-082\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"3790 BLK S ATLANTIC AV\", \"MilePost\": 21.38, \"City\": \"DAYTONA BEACH SHORES\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691397348000, \"DrivingZone\": \"BOTH\"}\n",
"{\"OBJECTID\": 92, \"AccessName\": \"FLAGLER AV\", \"AccessID\": \"NS-110\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"500 BLK FLAGLER AV\", \"MilePost\": 2.57, \"City\": \"NEW SMYRNA BEACH\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691397348000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 94, \"AccessName\": \"CRAWFORD RD\", \"AccessID\": \"NS-108\", \"AccessType\": \"OPEN VEHICLE RAMP - PASS\", \"GeneralLoc\": \"800 BLK N ATLANTIC AV\", \"MilePost\": 2.19, \"City\": \"NEW SMYRNA BEACH\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691397348000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 122, \"AccessName\": \"HARTFORD AV\", \"AccessID\": \"DB-043\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"1890 BLK N ATLANTIC AV\", \"MilePost\": 12.76, \"City\": \"DAYTONA BEACH\", \"AccessStatus\": \"CLOSED - SEASONAL\", \"Entry_Date_Time\": 1691394832000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 125, \"AccessName\": \"WILLIAMS AV\", \"AccessID\": \"DB-042\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"2200 BLK N ATLANTIC AV\", \"MilePost\": 12.5, \"City\": \"DAYTONA BEACH\", \"AccessStatus\": \"4X4 ONLY\", \"Entry_Date_Time\": 1691394952000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 134, \"AccessName\": \"CARDINAL DR\", \"AccessID\": \"OB-036\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"600 BLK S ATLANTIC AV\", \"MilePost\": 11.27, \"City\": \"ORMOND BEACH\", \"AccessStatus\": \"4X4 ONLY\", \"Entry_Date_Time\": 1691394952000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 229, \"AccessName\": \"EL PORTAL ST\", \"AccessID\": \"DBS-076\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"3200 BLK S ATLANTIC AV\", \"MilePost\": 20.04, \"City\": \"DAYTONA BEACH SHORES\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691397348000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 230, \"AccessName\": \"HARVARD DR\", \"AccessID\": \"OB-038\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"900 BLK S ATLANTIC AV\", \"MilePost\": 11.72, \"City\": \"ORMOND BEACH\", \"AccessStatus\": \"4X4 ONLY\", \"Entry_Date_Time\": 1691394952000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 232, \"AccessName\": \"VAN AV\", \"AccessID\": \"DBS-075\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"3100 BLK S ATLANTIC AV\", \"MilePost\": 19.6, \"City\": \"DAYTONA BEACH SHORES\", \"AccessStatus\": \"OPEN\", \"Entry_Date_Time\": 1691397348000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 233, \"AccessName\": \"ROCKEFELLER DR\", \"AccessID\": \"OB-034\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"400 BLK S ATLANTIC AV\", \"MilePost\": 10.9, \"City\": \"ORMOND BEACH\", \"AccessStatus\": \"CLOSED - SEASONAL\", \"Entry_Date_Time\": 1691394832000, \"DrivingZone\": \"YES\"}\n",
"{\"OBJECTID\": 235, \"AccessName\": \"MINERVA RD\", \"AccessID\": \"DBS-069\", \"AccessType\": \"OPEN VEHICLE RAMP\", \"GeneralLoc\": \"2300 BLK S ATLANTIC AV\", \"MilePost\": 17.52, \"City\": \"DAYTONA BEACH SHORES\", \"AccessStatus\": \"4X4 ONLY\", \"Entry_Date_Time\": 1691395124000, \"DrivingZone\": \"YES\"}\n"
]
}
],
"source": [
"for doc in docs:\n",
" print(doc.page_content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,101 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ad553e51",
"metadata": {},
"source": [
"# Async Chromium\n",
"\n",
"Chromium is one of the browsers supported by Playwright, a library used to control browser automation. \n",
"\n",
"By running `p.chromium.launch(headless=True)`, we are launching a headless instance of Chromium. \n",
"\n",
"Headless mode means that the browser is running without a graphical user interface.\n",
"\n",
"`AsyncChromiumLoader` load the page, and then we use `Html2TextTransformer` to trasnform to text."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1c3a4c19",
"metadata": {},
"outputs": [],
"source": [
"! pip install -q playwright beautifulsoup4\n",
"! playwright install"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dd2cdea7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'<!DOCTYPE html><html lang=\"en\"><head><script src=\"https://s0.2mdn.net/instream/video/client.js\" asyn'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.document_loaders import AsyncChromiumLoader\n",
"urls = [\"https://www.wsj.com\"]\n",
"loader = AsyncChromiumLoader(urls)\n",
"docs = loader.load()\n",
"docs[0].page_content[0:100]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "013caa7e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Skip to Main ContentSkip to SearchSkip to... Select * Top News * What's News *\\nFeatured Stories * Retirement * Life & Arts * Hip-Hop * Sports * Video *\\nEconomy * Real Estate * Sports * CMO * CIO * CFO * Risk & Compliance *\\nLogistics Report * Sustainable Business * Heard on the Street * Barrons *\\nMarketWatch * Mansion Global * Penta * Opinion * Journal Reports * Sponsored\\nOffers Explore Our Brands * WSJ * * * * * Barron's * * * * * MarketWatch * * *\\n* * IBD # The Wall Street Journal SubscribeSig\""
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.document_transformers import Html2TextTransformer\n",
"html2text = Html2TextTransformer()\n",
"docs_transformed = html2text.transform_documents(docs)\n",
"docs_transformed[0].page_content[0:500]"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
<head>
<title>Sample RSS feed subscriptions</title>
</head>
<body>
<outline text="Tech" title="Tech">
<outline type="rss" text="Engadget" title="Engadget" xmlUrl="http://www.engadget.com/rss-full.xml" htmlUrl="http://www.engadget.com"/>
<outline type="rss" text="Ars Technica - All content" title="Ars Technica - All content" xmlUrl="http://feeds.arstechnica.com/arstechnica/index/" htmlUrl="https://arstechnica.com"/>
</outline>
</body>
</opml>

View File

@@ -73,13 +73,27 @@
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "41c8a46f",
"metadata": {},
"source": [
"If you want to use an alternative loader, you can provide a custom function, for example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eba3002d",
"metadata": {},
"outputs": [],
"source": []
"source": [
"from langchain.document_loaders import PyPDFLoader\n",
"def load_pdf(file_path):\n",
" return PyPDFLoader(file_path)\n",
"\n",
"loader = GCSFileLoader(project_name=\"aist\", bucket=\"testing-hwc\", blob=\"fake.pdf\", loader_func=load_pdf)"
]
}
],
"metadata": {

View File

@@ -9,66 +9,16 @@
"\n",
"GROBID is a machine learning library for extracting, parsing, and re-structuring raw documents.\n",
"\n",
"It is particularly good for sturctured PDFs, like academic papers.\n",
"It is designed and expected to be used to parse academic papers, where it works particularly well. Note: if the articles supplied to Grobid are large documents (e.g. dissertations) exceeding a certain number of elements, they might not be processed. \n",
"\n",
"This loader uses GROBIB to parse PDFs into `Documents` that retain metadata associated with the section of text.\n",
"This loader uses Grobid to parse PDFs into `Documents` that retain metadata associated with the section of text.\n",
"\n",
"---\n",
"The best approach is to install Grobid via docker, see https://grobid.readthedocs.io/en/latest/Grobid-docker/. \n",
"\n",
"For users on `Mac` - \n",
"(Note: additional instructions can be found [here](https://python.langchain.com/docs/extras/integrations/providers/grobid.mdx).)\n",
"\n",
"(Note: additional instructions can be found [here](https://python.langchain.com/docs/ecosystem/integrations/grobid.mdx).)\n",
"\n",
"Install Java (Apple Silicon):\n",
"```\n",
"$ arch -arm64 brew install openjdk@11\n",
"$ brew --prefix openjdk@11\n",
"/opt/homebrew/opt/openjdk@ 11\n",
"```\n",
"\n",
"In `~/.zshrc`:\n",
"```\n",
"export JAVA_HOME=/opt/homebrew/opt/openjdk@11\n",
"export PATH=$JAVA_HOME/bin:$PATH\n",
"```\n",
"\n",
"Then, in Terminal:\n",
"```\n",
"$ source ~/.zshrc\n",
"```\n",
"\n",
"Confirm install:\n",
"```\n",
"$ which java\n",
"/opt/homebrew/opt/openjdk@11/bin/java\n",
"$ java -version \n",
"openjdk version \"11.0.19\" 2023-04-18\n",
"OpenJDK Runtime Environment Homebrew (build 11.0.19+0)\n",
"OpenJDK 64-Bit Server VM Homebrew (build 11.0.19+0, mixed mode)\n",
"```\n",
"\n",
"Then, get [Grobid](https://grobid.readthedocs.io/en/latest/Install-Grobid/#getting-grobid):\n",
"```\n",
"$ curl -LO https://github.com/kermitt2/grobid/archive/0.7.3.zip\n",
"$ unzip 0.7.3.zip\n",
"```\n",
" \n",
"Build\n",
"```\n",
"$ ./gradlew clean install\n",
"```\n",
"\n",
"Then, run the server:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "2d8992fc",
"metadata": {},
"outputs": [],
"source": [
"! get_ipython().system_raw('nohup ./gradlew run > grobid.log 2>&1 &')"
"Once grobid is up-and-running you can interact as described below. \n"
]
},
{

View File

@@ -0,0 +1,178 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c83b6a4c",
"metadata": {},
"source": [
"# Huawei OBS Directory\n",
"The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c2191935",
"metadata": {},
"outputs": [],
"source": [
"# Install the required package\n",
"# pip install esdk-obs-python"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "55fca3b4",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import OBSDirectoryLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c3ed419f",
"metadata": {},
"outputs": [],
"source": [
"endpoint = \"your-endpoint\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "3428fd4e",
"metadata": {},
"outputs": [],
"source": [
"# Configure your access credentials\\n\n",
"config = {\n",
" \"ak\": \"your-access-key\",\n",
" \"sk\": \"your-secret-key\"\n",
"}\n",
"loader = OBSDirectoryLoader(\"your-bucket-name\", endpoint=endpoint, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9beede9f",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "1e20a839",
"metadata": {},
"source": [
"## Specify a Prefix for Loading\n",
"If you want to load objects with a specific prefix from the bucket, you can use the following code:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "125f311d",
"metadata": {},
"outputs": [],
"source": [
"loader = OBSDirectoryLoader(\"your-bucket-name\", endpoint=endpoint, config=config, prefix=\"test_prefix\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b3488037",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "84c82c0a",
"metadata": {},
"source": [
"## Get Authentication Information from ECS\n",
"If your langchain is deployed on Huawei Cloud ECS and [Agency is set up](https://support.huaweicloud.com/intl/en-us/usermanual-ecs/ecs_03_0166.html#section7), the loader can directly get the security token from ECS without needing access key and secret key. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "1db99969",
"metadata": {},
"outputs": [],
"source": [
"config = {\"get_token_from_ecs\": True}\n",
"loader = OBSDirectoryLoader(\"your-bucket-name\", endpoint=endpoint, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "57dd9f35",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "30205d25",
"metadata": {},
"source": [
"## Use a Public Bucket\n",
"If your bucket's bucket policy allows anonymous access (anonymous users have `listBucket` and `GetObject` permissions), you can directly load the objects without configuring the `config` parameter."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4dfa2ef0",
"metadata": {},
"outputs": [],
"source": [
"loader = OBSDirectoryLoader(\"your-bucket-name\", endpoint=endpoint)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "67d4c1d0",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,180 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4394a872",
"metadata": {},
"source": [
"# Huawei OBS File\n",
"The following code demonstrates how to load an object from the Huawei OBS (Object Storage Service) as document."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c43d811b",
"metadata": {},
"outputs": [],
"source": [
"# Install the required package\n",
"# pip install esdk-obs-python"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5e16bae6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders.obs_file import OBSFileLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "75cc7e7c",
"metadata": {},
"outputs": [],
"source": [
"endpoint = \"your-endpoint\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f9816984",
"metadata": {},
"outputs": [],
"source": [
"from obs import ObsClient\n",
"obs_client = ObsClient(access_key_id=\"your-access-key\", secret_access_key=\"your-secret-key\", server=endpoint)\n",
"loader = OBSFileLoader(\"your-bucket-name\", \"your-object-key\", client=obs_client)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6143b39b",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "633e05ca",
"metadata": {},
"source": [
"## Each Loader with Separate Authentication Information\n",
"If you don't need to reuse OBS connections between different loaders, you can directly configure the `config`. The loader will use the config information to initialize its own OBS client."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a5dd6a5d",
"metadata": {},
"outputs": [],
"source": [
"# Configure your access credentials\\n\n",
"config = {\n",
" \"ak\": \"your-access-key\",\n",
" \"sk\": \"your-secret-key\"\n",
"}\n",
"loader = OBSFileLoader(\"your-bucket-name\", \"your-object-key\",endpoint=endpoint, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a741f1c",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "1e2e611c",
"metadata": {},
"source": [
"## Get Authentication Information from ECS\n",
"If your langchain is deployed on Huawei Cloud ECS and [Agency is set up](https://support.huaweicloud.com/intl/en-us/usermanual-ecs/ecs_03_0166.html#section7), the loader can directly get the security token from ECS without needing access key and secret key. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "338fafef",
"metadata": {},
"outputs": [],
"source": [
"config = {\"get_token_from_ecs\": True}\n",
"loader = OBSFileLoader(\"your-bucket-name\", \"your-object-key\", endpoint=endpoint, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "73976c55",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "b77aa18c",
"metadata": {},
"source": [
"## Access a Publicly Accessible Object\n",
"If the object you want to access allows anonymous user access (anonymous users have `GetObject` permission), you can directly load the object without configuring the `config` parameter."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "df83d121",
"metadata": {},
"outputs": [],
"source": [
"loader = OBSFileLoader(\"your-bucket-name\", \"your-object-key\", endpoint=endpoint)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82a844ba",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,192 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2dfc4698",
"metadata": {},
"source": [
"# News URL\n",
"\n",
"This covers how to load HTML news articles from a list of URLs into a document format that we can use downstream."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "16c3699e",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-02T21:18:18.886031400Z",
"start_time": "2023-08-02T21:18:17.682345Z"
}
},
"outputs": [],
"source": [
"from langchain.document_loaders import NewsURLLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "836fbac1",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-02T21:18:18.895539800Z",
"start_time": "2023-08-02T21:18:18.895539800Z"
}
},
"outputs": [],
"source": [
"urls = [\n",
" \"https://www.bbc.com/news/world-us-canada-66388172\",\n",
" \"https://www.bbc.com/news/entertainment-arts-66384971\",\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "33089aba-ff74-4d00-8f40-9449c29587cc",
"metadata": {},
"source": [
"Pass in urls to load them into Documents"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "00f46fda",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-02T21:18:19.227074500Z",
"start_time": "2023-08-02T21:18:18.895539800Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"First article: page_content='In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that \"no reasonable person\" would view her claims as fact. Neither she nor her representatives have commented.' metadata={'title': 'Donald Trump indictment: What do we know about the six co-conspirators?', 'link': 'https://www.bbc.com/news/world-us-canada-66388172', 'authors': [], 'language': 'en', 'description': 'Six people accused of helping Mr Trump undermine the election have been described by prosecutors.', 'publish_date': None}\n",
"\n",
"Second article: page_content='Ms Williams added: \"If there\\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\\'t have to go through that same experience, I\\'m going to do that.\"' metadata={'title': \"Lizzo dancers Arianna Davis and Crystal Williams: 'No one speaks out, they are scared'\", 'link': 'https://www.bbc.com/news/entertainment-arts-66384971', 'authors': [], 'language': 'en', 'description': 'The US pop star is being sued for sexual harassment and fat-shaming but has yet to comment.', 'publish_date': None}\n"
]
}
],
"source": [
"loader = NewsURLLoader(urls=urls)\n",
"data = loader.load()\n",
"print(\"First article: \", data[0])\n",
"print(\"\\nSecond article: \", data[1])"
]
},
{
"cell_type": "markdown",
"source": [
"Use nlp=True to run nlp analysis and generate keywords + summary"
],
"metadata": {
"collapsed": false
},
"id": "98ac26c488315bff"
},
{
"cell_type": "code",
"execution_count": 4,
"id": "b68a26b3",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-02T21:18:19.585758200Z",
"start_time": "2023-08-02T21:18:19.227074500Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"First article: page_content='In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that \"no reasonable person\" would view her claims as fact. Neither she nor her representatives have commented.' metadata={'title': 'Donald Trump indictment: What do we know about the six co-conspirators?', 'link': 'https://www.bbc.com/news/world-us-canada-66388172', 'authors': [], 'language': 'en', 'description': 'Six people accused of helping Mr Trump undermine the election have been described by prosecutors.', 'publish_date': None, 'keywords': ['powell', 'know', 'donald', 'trump', 'review', 'indictment', 'telling', 'view', 'reasonable', 'person', 'testimony', 'coconspirators', 'riot', 'representatives', 'claims'], 'summary': 'In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that \"no reasonable person\" would view her claims as fact.\\nNeither she nor her representatives have commented.'}\n",
"\n",
"Second article: page_content='Ms Williams added: \"If there\\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\\'t have to go through that same experience, I\\'m going to do that.\"' metadata={'title': \"Lizzo dancers Arianna Davis and Crystal Williams: 'No one speaks out, they are scared'\", 'link': 'https://www.bbc.com/news/entertainment-arts-66384971', 'authors': [], 'language': 'en', 'description': 'The US pop star is being sued for sexual harassment and fat-shaming but has yet to comment.', 'publish_date': None, 'keywords': ['davis', 'lizzo', 'singers', 'experience', 'crystal', 'ensure', 'arianna', 'theres', 'williams', 'power', 'going', 'dancers', 'im', 'speaks', 'work', 'ms', 'scared'], 'summary': 'Ms Williams added: \"If there\\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\\'t have to go through that same experience, I\\'m going to do that.\"'}\n"
]
}
],
"source": [
"loader = NewsURLLoader(urls=urls, nlp=True)\n",
"data = loader.load()\n",
"print(\"First article: \", data[0])\n",
"print(\"\\nSecond article: \", data[1])"
]
},
{
"cell_type": "code",
"execution_count": 5,
"outputs": [
{
"data": {
"text/plain": "['powell',\n 'know',\n 'donald',\n 'trump',\n 'review',\n 'indictment',\n 'telling',\n 'view',\n 'reasonable',\n 'person',\n 'testimony',\n 'coconspirators',\n 'riot',\n 'representatives',\n 'claims']"
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0].metadata['keywords']"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-08-02T21:18:19.585758200Z",
"start_time": "2023-08-02T21:18:19.585758200Z"
}
},
"id": "ae37e004e0284b1d"
},
{
"cell_type": "code",
"execution_count": 6,
"outputs": [
{
"data": {
"text/plain": "'In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that \"no reasonable person\" would view her claims as fact.\\nNeither she nor her representatives have commented.'"
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0].metadata['summary']"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-08-02T21:18:19.598966800Z",
"start_time": "2023-08-02T21:18:19.594950200Z"
}
},
"id": "7676155fb175e53e"
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,144 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Nuclia Understanding API document loader\n",
"\n",
"[Nuclia](https://nuclia.com) automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing.\n",
"\n",
"The Nuclia Understanding API supports the processing of unstructured data, including text, web pages, documents, and audio/video contents. It extracts all texts wherever they are (using speech-to-text or OCR when needed), it also extracts metadata, embedded files (like images in a PDF), and web links. If machine learning is enabled, it identifies entities, provides a summary of the content and generates embeddings for all the sentences.\n",
"\n",
"To use the Nuclia Understanding API, you need to have a Nuclia account. You can create one for free at [https://nuclia.cloud](https://nuclia.cloud), and then [create a NUA key](https://docs.nuclia.dev/docs/docs/using/understanding/intro)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#!pip install --upgrade protobuf\n",
"#!pip install nucliadb-protos"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"NUCLIA_ZONE\"] = \"<YOUR_ZONE>\" # e.g. europe-1\n",
"os.environ[\"NUCLIA_NUA_KEY\"] = \"<YOUR_API_KEY>\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"To use the Nuclia document loader, you need to instantiate a `NucliaUnderstandingAPI` tool:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools.nuclia import NucliaUnderstandingAPI\n",
"\n",
"nua = NucliaUnderstandingAPI(enable_ml=False)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders.nuclia import NucliaLoader\n",
"\n",
"loader = NucliaLoader(\"./interview.mp4\", nua)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can now call the `load` the document in a loop until you get the document."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"pending = True\n",
"while pending:\n",
" time.sleep(15)\n",
" docs = loader.load()\n",
" if len(docs) > 0:\n",
" print(docs[0].page_content)\n",
" print(docs[0].metadata)\n",
" pending = False\n",
" else:\n",
" print(\"waiting...\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Retrieved information\n",
"\n",
"Nuclia returns the following information:\n",
"\n",
"- file metadata\n",
"- extracted text\n",
"- nested text (like text in an embedded image)\n",
"- paragraphs and sentences splitting (defined by the position of their first and last characters, plus start time and end time for a video or audio file)\n",
"- links\n",
"- a thumbnail\n",
"- embedded files\n",
"\n",
"Note:\n",
"\n",
" Generated files (thumbnail, extracted embedded files, etc.) are provided as a token. You can download them with the [`/processing/download` endpoint](https://docs.nuclia.dev/docs/api#operation/Download_binary_file_processing_download_get).\n",
"\n",
" Also at any level, if an attribute exceeds a certain size, it will be put in a downloadable file and will be replaced in the document by a file pointer. This will consist of `{\"file\": {\"uri\": \"JWT_TOKEN\"}}`. The rule is that if the size of the message is greater than 1000000 characters, the biggest parts will be moved to downloadable files. First, the compression process will target vectors. If that is not enough, it will target large field metadata, and finally it will target extracted text.\n"
]
}
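,
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hypothetical sketch (the query parameter name, zone-based URL, and auth header are assumptions about the endpoint, not confirmed here), you could scan the returned metadata for such file pointers and fetch each one from the `/processing/download` endpoint:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"import requests\n",
"\n",
"def find_file_pointers(node):\n",
"    # recursively yield every {\"file\": {\"uri\": ...}} pointer in a nested structure\n",
"    if isinstance(node, dict):\n",
"        if isinstance(node.get(\"file\"), dict) and \"uri\" in node[\"file\"]:\n",
"            yield node[\"file\"][\"uri\"]\n",
"        else:\n",
"            for value in node.values():\n",
"                yield from find_file_pointers(value)\n",
"    elif isinstance(node, list):\n",
"        for item in node:\n",
"            yield from find_file_pointers(item)\n",
"\n",
"for token in find_file_pointers(docs[0].metadata):\n",
"    # hypothetical request shape -- check the endpoint docs linked above\n",
"    res = requests.get(\n",
"        f\"https://{os.environ['NUCLIA_ZONE']}.nuclia.cloud/api/v1/processing/download?token={token}\",\n",
"        headers={\"x-stf-nuakey\": f\"Bearer {os.environ['NUCLIA_NUA_KEY']}\"},\n",
"    )\n",
"    print(res.status_code)"
]
}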
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,139 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3df0dcf8",
"metadata": {},
"source": [
"# PubMed\n",
"\n",
">[PubMed®](https://pubmed.ncbi.nlm.nih.gov/) by `The National Center for Biotechnology Information, National Library of Medicine` comprises more than 35 million citations for biomedical literature from `MEDLINE`, life science journals, and online books. Citations may include links to full text content from `PubMed Central` and publisher web sites."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "aecaff63",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import PubMedLoader"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f2f7e8d3",
"metadata": {},
"outputs": [],
"source": [
"loader = PubMedLoader(\"chatgpt\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ed115aa1",
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "b68d3264-b893-45e4-8ab0-077b25a586dc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"3"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(docs)"
]
},
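{
"cell_type": "markdown",
"id": "c1a2b3d4-5e6f-4a7b-8c9d-0e1f2a3b4c5d",
"metadata": {},
"source": [
"The count above reflects the loader's default cap on retrieved documents. A quick sketch of raising it, assuming your version exposes the `load_max_docs` parameter:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d2b3c4e5-6f7a-4b8c-9d0e-1f2a3b4c5d6e",
"metadata": {},
"outputs": [],
"source": [
"# fetch up to 10 results instead of the default cap\n",
"loader_more = PubMedLoader(\"chatgpt\", load_max_docs=10)\n",
"more_docs = loader_more.load()\n",
"len(more_docs)"
]
},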
{
"cell_type": "code",
"execution_count": 8,
"id": "9f4626d2-068d-4aed-9ffe-ad754ad4b4cd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'uid': '37548997',\n",
" 'Title': 'Performance of ChatGPT on the Situational Judgement Test-A Professional Dilemmas-Based Examination for Doctors in the United Kingdom.',\n",
" 'Published': '2023-08-07',\n",
" 'Copyright Information': '©Robin J Borchert, Charlotte R Hickman, Jack Pepys, Timothy J Sadler. Originally published in JMIR Medical Education (https://mededu.jmir.org), 07.08.2023.'}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[1].metadata"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8000f687-b500-4cce-841b-70d6151304da",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"BACKGROUND: ChatGPT is a large language model that has performed well on professional examinations in the fields of medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors.\\nOBJECTIVE: We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT): a national examination taken by all final-year medical students in the United Kingdom. This examination is designed to assess attributes such as communication, teamwork, patient safety, prioritization skills, professionalism, and ethics.\\nMETHODS: All questions from the UK Foundation Programme Office's (UKFPO's) 2023 SJT practice examination were inputted into ChatGPT. For each question, ChatGPT's answers and rationales were recorded and assessed on the basis of the official UK Foundation Programme Office scoring template. Questions were categorized into domains of Good Medical Practice on the basis of the domains referenced in the rationales provided in the scoring sheet. Questions without clear domain links were screened by reviewers and assigned one or multiple domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated.\\nRESULTS: Overall, ChatGPT performed well, scoring 76% on the SJT but scoring full marks on only a few questions (9%), which may reflect possible flaws in ChatGPT's situational judgement or inconsistencies in the reasoning across questions (or both) in the examination itself. ChatGPT demonstrated consistent performance across the 4 outlined domains in Good Medical Practice for doctors.\\nCONCLUSIONS: Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education for standardizing questions and providing consistent rationales for examinations assessing professionalism and ethics.\""
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[1].page_content"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -9,7 +9,7 @@
"\n",
"We may want to process load all URLs under a root directory.\n",
"\n",
"For example, let's look at the [LangChain JS documentation](https://js.langchain.com/docs/).\n",
"For example, let's look at the [Python 3.9 Document](https://docs.python.org/3.9/).\n",
"\n",
"This has many interesting child pages that we may want to read in bulk.\n",
"\n",
@@ -19,13 +19,28 @@
" \n",
"We do this using the `RecursiveUrlLoader`.\n",
"\n",
"This also gives us the flexibility to exclude some children (e.g., the `api` directory with > 800 child pages)."
"This also gives us the flexibility to exclude some children, customize the extractor, and more."
]
},
{
"cell_type": "markdown",
"id": "1be8094f",
"metadata": {},
"source": [
"# Parameters\n",
"- url: str, the target url to crawl.\n",
"- exclude_dirs: Optional[str], webpage directories to exclude.\n",
"- use_async: Optional[bool], wether to use async requests, using async requests is usually faster in large tasks. However, async will disable the lazy loading feature(the function still works, but it is not lazy). By default, it is set to False.\n",
"- extractor: Optional[Callable[[str], str]], a function to extract the text of the document from the webpage, by default it returns the page as it is. It is recommended to use tools like goose3 and beautifulsoup to extract the text. By default, it just returns the page as it is.\n",
"- max_depth: Optional[int] = None, the maximum depth to crawl. By default, it is set to 2. If you need to crawl the whole website, set it to a number that is large enough would simply do the job.\n",
"- timeout: Optional[int] = None, the timeout for each request, in the unit of seconds. By default, it is set to 10.\n",
"- prevent_outside: Optional[bool] = None, whether to prevent crawling outside the root url. By default, it is set to True."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "2e3532b2",
"execution_count": null,
"id": "23c18539",
"metadata": {},
"outputs": [],
"source": [
@@ -42,13 +57,15 @@
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d69e5620",
"execution_count": null,
"id": "55394afe",
"metadata": {},
"outputs": [],
"source": [
"url = \"https://js.langchain.com/docs/modules/memory/examples/\"\n",
"loader = RecursiveUrlLoader(url=url)\n",
"from bs4 import BeautifulSoup as Soup\n",
"\n",
"url = \"https://docs.python.org/3.9/\"\n",
"loader = RecursiveUrlLoader(url=url, max_depth=2, extractor=lambda x: Soup(x, \"html.parser\").text)\n",
"docs = loader.load()"
]
},
@@ -61,7 +78,7 @@
{
"data": {
"text/plain": [
"12"
"'\\n\\n\\n\\n\\nPython Frequently Asked Questions — Python 3.'"
]
},
"execution_count": 3,
@@ -70,19 +87,21 @@
}
],
"source": [
"len(docs)"
"docs[0].page_content[:50]"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "89355b7c",
"id": "13bd7e16",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\n\\n\\n\\nBuffer Window Memory | 🦜️🔗 Langchain\\n\\n\\n\\n\\n\\nSki'"
"{'source': 'https://docs.python.org/3.9/library/index.html',\n",
" 'title': 'The Python Standard Library — Python 3.9.17 documentation',\n",
" 'language': None}"
]
},
"execution_count": 4,
@@ -91,137 +110,48 @@
}
],
"source": [
"docs[0].page_content[:50]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "13bd7e16",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'source': 'https://js.langchain.com/docs/modules/memory/examples/buffer_window_memory',\n",
" 'title': 'Buffer Window Memory | 🦜️🔗 Langchain',\n",
" 'description': 'BufferWindowMemory keeps track of the back-and-forths in conversation, and then uses a window of size k to surface the last k back-and-forths to use as memory.',\n",
" 'language': 'en'}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].metadata"
"docs[-1].metadata"
]
},
{
"cell_type": "markdown",
"id": "40fc13ef",
"id": "5866e5a6",
"metadata": {},
"source": [
"Now, let's try a more extensive example, the `docs` root dir.\n",
"\n",
"We will skip everything under `api`.\n",
"\n",
"For this, we can `lazy_load` each page as we crawl the tree, using `WebBaseLoader` to load each as we go."
"However, since it's hard to perform a perfect filter, you may still see some irrelevant results in the results. You can perform a filter on the returned documents by yourself, if it's needed. Most of the time, the returned results are good enough."
]
},
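{
"cell_type": "markdown",
"id": "7b5d9e0c",
"metadata": {},
"source": [
"A minimal post-filter sketch (added for illustration; the `/library/` substring is an assumed criterion based on the metadata shown above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2c6f8a1d",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: keep only documents whose source URL contains an assumed substring.\n",
"filtered_docs = [d for d in docs if \"/library/\" in d.metadata.get(\"source\", \"\")]\n",
"len(filtered_docs)"
]
},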
{
"cell_type": "markdown",
"id": "4ec8ecef",
"metadata": {},
"source": [
"Testing on LangChain docs."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5c938b9f",
"execution_count": 2,
"id": "349b5598",
"metadata": {},
"outputs": [],
"source": [
"url = \"https://js.langchain.com/docs/\"\n",
"exclude_dirs = [\"https://js.langchain.com/docs/api/\"]\n",
"loader = RecursiveUrlLoader(url=url, exclude_dirs=exclude_dirs)\n",
"# Lazy load each\n",
"docs = [print(doc) or doc for doc in loader.lazy_load()]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "30ff61d3",
"metadata": {},
"outputs": [],
"source": [
"# Load all pages\n",
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "457e30f3",
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"text/plain": [
"188"
"8"
]
},
"execution_count": 8,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"url = \"https://js.langchain.com/docs/modules/memory/integrations/\"\n",
"loader = RecursiveUrlLoader(url=url)\n",
"docs = loader.load()\n",
"len(docs)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "bca80b4a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\n\\n\\n\\nAgent Simulations | 🦜️🔗 Langchain\\n\\n\\n\\n\\n\\nSkip t'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].page_content[:50]"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "df97cf22",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'source': 'https://js.langchain.com/docs/use_cases/agent_simulations/',\n",
" 'title': 'Agent Simulations | 🦜️🔗 Langchain',\n",
" 'description': 'Agent simulations involve taking multiple agents and having them interact with each other.',\n",
" 'language': 'en'}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].metadata"
]
}
],
"metadata": {

View File

@@ -0,0 +1,311 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2dfc4698",
"metadata": {},
"source": [
"# RSS Feeds\n",
"\n",
"This covers how to load HTML news articles from a list of RSS feed URLs into a document format that we can use downstream."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7c2cd52-c1f7-4a06-8539-b0117da91fba",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!pip install feedparser newspaper3k listparser"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "16c3699e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import RSSFeedLoader"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "836fbac1",
"metadata": {},
"outputs": [],
"source": [
"urls = [\"https://news.ycombinator.com/rss\"]"
]
},
{
"cell_type": "markdown",
"id": "33089aba-ff74-4d00-8f40-9449c29587cc",
"metadata": {},
"source": [
"Pass in urls to load them into Documents"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "00f46fda",
"metadata": {},
"outputs": [],
"source": [
"loader = RSSFeedLoader(urls=urls)\n",
"data = loader.load()\n",
"print(len(data))"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "b447468cc42266d0",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(next Rich)\n",
"\n",
"04 August 2023\n",
"\n",
"Rich Hickey\n",
"\n",
"It is with a mixture of heartache and optimism that I announce today my (long planned) retirement from commercial software development, and my employment at Nubank. Its been thrilling to see Clojure and Datomic successfully applied at scale.\n",
"\n",
"I look forward to continuing to lead ongoing work maintaining and enhancing Clojure with Alex, Stu, Fogus and many others, as an independent developer once again. We have many useful things planned for 1.12 and beyond. The community remains friendly, mature and productive, and is taking Clojure into many interesting new domains.\n",
"\n",
"I want to highlight and thank Nubank for their ongoing sponsorship of Alex, Fogus and the core team, as well as the Clojure community at large.\n",
"\n",
"Stu will continue to lead the development of Datomic at Nubank, where the Datomic team grows and thrives. Im particularly excited to see where the new free availability of Datomic will lead.\n",
"\n",
"My time with Cognitect remains the highlight of my career. I have learned from absolutely everyone on our team, and am forever grateful to all for our interactions. There are too many people to thank here, but I must extend my sincerest appreciation and love to Stu and Justin for (repeatedly) taking a risk on me and my ideas, and for being the best of partners and friends, at all times fully embodying the notion of integrity. And of course to Alex Miller - who possesses in abundance many skills I lack, and without whose indomitable spirit, positivity and friendship Clojure would not have become what it did.\n",
"\n",
"I have made many friends through Clojure and Cognitect, and I hope to nurture those friendships moving forward.\n",
"\n",
"Retirement returns me to the freedom and independence I had when originally developing Clojure. The journey continues!\n"
]
}
],
"source": [
"print(data[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "c36d3b0d329faf2a",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"You can pass arguments to the NewsURLLoader which it uses to load articles."
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "5fdada62470d3019",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error fetching or processing https://twitter.com/andrewmccalip/status/1687405505604734978, exception: You must `parse()` an article first!\n",
"Error processing entry https://twitter.com/andrewmccalip/status/1687405505604734978, exception: list index out of range\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"13\n"
]
}
],
"source": [
"loader = RSSFeedLoader(urls=urls, nlp=True)\n",
"data = loader.load()\n",
"print(len(data))"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "11d71963f7735c1d",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"['nubank',\n",
" 'alex',\n",
" 'stu',\n",
" 'taking',\n",
" 'team',\n",
" 'remains',\n",
" 'rich',\n",
" 'clojure',\n",
" 'thank',\n",
" 'planned',\n",
" 'datomic']"
]
},
"execution_count": 37,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0].metadata['keywords']"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "9fb64ba0e8780966",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"'Its been thrilling to see Clojure and Datomic successfully applied at scale.\\nI look forward to continuing to lead ongoing work maintaining and enhancing Clojure with Alex, Stu, Fogus and many others, as an independent developer once again.\\nThe community remains friendly, mature and productive, and is taking Clojure into many interesting new domains.\\nI want to highlight and thank Nubank for their ongoing sponsorship of Alex, Fogus and the core team, as well as the Clojure community at large.\\nStu will continue to lead the development of Datomic at Nubank, where the Datomic team grows and thrives.'"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0].metadata['summary']"
]
},
{
"cell_type": "markdown",
"id": "98ac26c488315bff",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"You can also use an OPML file such as a Feedly export. Pass in either a URL or the OPML contents."
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "8b6f07ae526a897c",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error fetching http://www.engadget.com/rss-full.xml, exception: Error fetching http://www.engadget.com/rss-full.xml, exception: document declared as us-ascii, but parsed as utf-8\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"20\n"
]
}
],
"source": [
"with open(\"example_data/sample_rss_feeds.opml\", \"r\") as f:\n",
" loader = RSSFeedLoader(opml=f.read())\n",
"data = loader.load()\n",
"print(len(data))"
]
},
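{
"cell_type": "markdown",
"id": "3e9a7c5f",
"metadata": {},
"source": [
"A hedged sketch (not executed) of the URL variant; the URL below is a placeholder assumption:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6a4b8d2e",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: per the note above, `opml` also accepts a URL; this one is a placeholder.\n",
"url_loader = RSSFeedLoader(opml=\"https://example.com/sample_rss_feeds.opml\")\n",
"# url_data = url_loader.load()"
]
},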
{
"cell_type": "code",
"execution_count": 40,
"id": "b68a26b3",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"'The electric vehicle startup Fisker made a splash in Huntington Beach last night, showing off a range of new EVs it plans to build alongside the Fisker Ocean, which is slowly beginning deliveries in Europe and the US. With shades of Lotus circa 2010, it seems there\\'s something for most tastes, with a powerful four-door GT, a versatile pickup truck, and an affordable electric city car.\\n\\n\"We want the world to know that we have big plans and intend to move into several different segments, redefining each with our unique blend of design, innovation, and sustainability,\" said CEO Henrik Fisker.\\n\\nStarting with the cheapest, the Fisker PEAR—a cutesy acronym for \"Personal Electric Automotive Revolution\"—is said to use 35 percent fewer parts than other small EVs. Although it\\'s a smaller car, the PEAR seats six thanks to front and rear bench seats. Oh, and it has a frunk, which the company is calling the \"froot,\" something that will satisfy some British English speakers like Ars\\' friend and motoring journalist Jonny Smith.\\n\\nBut most exciting is the price—starting at $29,900 and scheduled for 2025. Fisker plans to contract with Foxconn to build the PEAR in Lordstown, Ohio, meaning it would be eligible for federal tax incentives.\\n\\nAdvertisement\\n\\nThe Fisker Alaska is the company\\'s pickup truck, built on a modified version of the platform used by the Ocean. It has an extendable cargo bed, which can be as little as 4.5 feet (1,371 mm) or as much as 9.2 feet (2,804 mm) long. Fisker claims it will be both the lightest EV pickup on sale and the most sustainable pickup truck in the world. Range will be an estimated 230240 miles (370386 km).\\n\\nThis, too, is slated for 2025, and also at a relatively affordable price, starting at $45,400. Fisker hopes to build this car in North America as well, although it isn\\'t saying where that might take place.\\n\\nFinally, there\\'s the Ronin, a four-door GT that bears more than a passing resemblance to the Fisker Karma, Henrik Fisker\\'s 2012 creation. There\\'s no price for this one, but Fisker says its all-wheel drive powertrain will boast 1,000 hp (745 kW) and will hit 60 mph from a standing start in two seconds—just about as fast as modern tires will allow. Expect a massive battery in this one, as Fisker says it\\'s targeting a 600-mile (956 km) range.\\n\\n\"Innovation and sustainability, along with design, are our three brand values. By 2027, we intend to produce the worlds first climate-neutral vehicle, and as our customers reinvent their relationships with mobility, we want to be a leader in software-defined transportation,\" Fisker said.'"
]
},
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0].page_content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5a0cbe8-18a6-4af2-b447-7abb8b734451",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,320 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "bda1f3f5",
"metadata": {},
"source": [
"# TensorFlow Datasets\n",
"\n",
">[TensorFlow Datasets](https://www.tensorflow.org/datasets) is a collection of datasets ready to use, with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed as [tf.data.Datasets](https://www.tensorflow.org/api_docs/python/tf/data/Dataset), enabling easy-to-use and high-performance input pipelines. To get started see the [guide](https://www.tensorflow.org/datasets/overview) and the [list of datasets](https://www.tensorflow.org/datasets/catalog/overview#all_datasets).\n",
"\n",
"This notebook shows how to load `TensorFlow Datasets` into a Document format that we can use downstream."
]
},
{
"cell_type": "markdown",
"id": "1b7a1eef-7bf7-4e7d-8bfc-c4e27c9488cb",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "2abd5578-aa3d-46b9-99af-8b262f0b3df8",
"metadata": {},
"source": [
"You need to install `tensorflow` and `tensorflow-datasets` python packages."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2e589036-351e-4c63-b734-c9a05fadb880",
"metadata": {},
"outputs": [],
"source": [
"!pip install tensorflow"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b674aaea-ed3a-4541-8414-260a8f67f623",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"!pip install tensorflow-datasets"
]
},
{
"cell_type": "markdown",
"id": "95f05e1c-195e-4e2b-ae8e-8d6637f15be6",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "markdown",
"id": "e66e211e-9419-4dbb-b3cd-afc3cf984305",
"metadata": {},
"source": [
"As an example, we use the [`mlqa/en` dataset](https://www.tensorflow.org/datasets/catalog/mlqa#mlqaen).\n",
"\n",
">`MLQA` (`Multilingual Question Answering Dataset`) is a benchmark dataset for evaluating multilingual question answering performance. The dataset consists of 7 languages: Arabic, German, Spanish, English, Hindi, Vietnamese, Chinese.\n",
">\n",
">- Homepage: https://github.com/facebookresearch/MLQA\n",
">- Source code: `tfds.datasets.mlqa.Builder`\n",
">- Download size: 72.21 MiB\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8968d645-c81c-4e3b-82bc-a3cbb5ddd93a",
"metadata": {},
"outputs": [],
"source": [
"# Feature structure of `mlqa/en` dataset:\n",
"\n",
"FeaturesDict({\n",
" 'answers': Sequence({\n",
" 'answer_start': int32,\n",
" 'text': Text(shape=(), dtype=string),\n",
" }),\n",
" 'context': Text(shape=(), dtype=string),\n",
" 'id': string,\n",
" 'question': Text(shape=(), dtype=string),\n",
" 'title': Text(shape=(), dtype=string),\n",
"})"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "30fcaba5-cc9b-4a0e-a8f4-c047018451c2",
"metadata": {},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"import tensorflow_datasets as tfds"
]
},
{
"cell_type": "code",
"execution_count": 78,
"id": "e307dd67-029e-4ee3-a65f-e085c09b0b8b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<_TakeDataset element_spec={'answers': {'answer_start': TensorSpec(shape=(None,), dtype=tf.int32, name=None), 'text': TensorSpec(shape=(None,), dtype=tf.string, name=None)}, 'context': TensorSpec(shape=(), dtype=tf.string, name=None), 'id': TensorSpec(shape=(), dtype=tf.string, name=None), 'question': TensorSpec(shape=(), dtype=tf.string, name=None), 'title': TensorSpec(shape=(), dtype=tf.string, name=None)}>"
]
},
"execution_count": 78,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# try directly access this dataset:\n",
"ds = tfds.load('mlqa/en', split='test')\n",
"ds = ds.take(1) # Only take a single example\n",
"ds"
]
},
{
"cell_type": "markdown",
"id": "5c9c4b08-d94f-4b53-add0-93769811644e",
"metadata": {},
"source": [
"Now we have to create a custom function to convert dataset sample into a Document.\n",
"\n",
"This is a requirement. There is no standard format for the TF datasets that's why we need to make a custom transformation function.\n",
"\n",
"Let's use `context` field as the `Document.page_content` and place other fields in the `Document.metadata`.\n"
]
},
{
"cell_type": "code",
"execution_count": 72,
"id": "78844113-f8d8-48a8-8105-685280b6cfa5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='After completing the journey around South America, on 23 February 2006, Queen Mary 2 met her namesake, the original RMS Queen Mary, which is permanently docked at Long Beach, California. Escorted by a flotilla of smaller ships, the two Queens exchanged a \"whistle salute\" which was heard throughout the city of Long Beach. Queen Mary 2 met the other serving Cunard liners Queen Victoria and Queen Elizabeth 2 on 13 January 2008 near the Statue of Liberty in New York City harbour, with a celebratory fireworks display; Queen Elizabeth 2 and Queen Victoria made a tandem crossing of the Atlantic for the meeting. This marked the first time three Cunard Queens have been present in the same location. Cunard stated this would be the last time these three ships would ever meet, due to Queen Elizabeth 2\\'s impending retirement from service in late 2008. However this would prove not to be the case, as the three Queens met in Southampton on 22 April 2008. Queen Mary 2 rendezvoused with Queen Elizabeth 2 in Dubai on Saturday 21 March 2009, after the latter ship\\'s retirement, while both ships were berthed at Port Rashid. With the withdrawal of Queen Elizabeth 2 from Cunard\\'s fleet and its docking in Dubai, Queen Mary 2 became the only ocean liner left in active passenger service.' metadata={'id': '5116f7cccdbf614d60bcd23498274ffd7b1e4ec7', 'title': 'RMS Queen Mary 2', 'question': 'What year did Queen Mary 2 complete her journey around South America?', 'answer': '2006'}\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"2023-08-03 14:27:08.482983: W tensorflow/core/kernels/data/cache_dataset_ops.cc:854] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.\n"
]
}
],
"source": [
"def decode_to_str(item: tf.Tensor) -> str:\n",
" return item.numpy().decode('utf-8')\n",
"\n",
"def mlqaen_example_to_document(example: dict) -> Document:\n",
" return Document(\n",
" page_content=decode_to_str(example[\"context\"]),\n",
" metadata={\n",
" \"id\": decode_to_str(example[\"id\"]),\n",
" \"title\": decode_to_str(example[\"title\"]),\n",
" \"question\": decode_to_str(example[\"question\"]),\n",
" \"answer\": decode_to_str(example[\"answers\"][\"text\"][0]),\n",
" },\n",
" )\n",
" \n",
" \n",
"for example in ds: \n",
" doc = mlqaen_example_to_document(example)\n",
" print(doc)\n",
" break"
]
},
{
"cell_type": "code",
"execution_count": 73,
"id": "2d43c834-5145-4793-9558-8e301ccaf3b4",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"from langchain.document_loaders import TensorflowDatasetLoader\n",
"\n",
"loader = TensorflowDatasetLoader(\n",
" dataset_name=\"mlqa/en\",\n",
" split_name=\"test\",\n",
" load_max_docs=3,\n",
" sample_to_document_function=mlqaen_example_to_document,\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "e29b954c-1407-4797-ae21-6ba8937156be",
"metadata": {},
"source": [
"`TensorflowDatasetLoader` has these parameters:\n",
"- `dataset_name`: the name of the dataset to load\n",
"- `split_name`: the name of the split to load. Defaults to \"train\".\n",
"- `load_max_docs`: a limit to the number of loaded documents. Defaults to 100.\n",
"- `sample_to_document_function`: a function that converts a dataset sample to a Document\n"
]
},
{
"cell_type": "code",
"execution_count": 74,
"id": "700e4ef2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2023-08-03 14:27:22.998964: W tensorflow/core/kernels/data/cache_dataset_ops.cc:854] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.\n"
]
},
{
"data": {
"text/plain": [
"3"
]
},
"execution_count": 74,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs = loader.load()\n",
"len(docs)"
]
},
{
"cell_type": "code",
"execution_count": 76,
"id": "9138940a-e9fe-4145-83e8-77589b5272c9",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'After completing the journey around South America, on 23 February 2006, Queen Mary 2 met her namesake, the original RMS Queen Mary, which is permanently docked at Long Beach, California. Escorted by a flotilla of smaller ships, the two Queens exchanged a \"whistle salute\" which was heard throughout the city of Long Beach. Queen Mary 2 met the other serving Cunard liners Queen Victoria and Queen Elizabeth 2 on 13 January 2008 near the Statue of Liberty in New York City harbour, with a celebratory fireworks display; Queen Elizabeth 2 and Queen Victoria made a tandem crossing of the Atlantic for the meeting. This marked the first time three Cunard Queens have been present in the same location. Cunard stated this would be the last time these three ships would ever meet, due to Queen Elizabeth 2\\'s impending retirement from service in late 2008. However this would prove not to be the case, as the three Queens met in Southampton on 22 April 2008. Queen Mary 2 rendezvoused with Queen Elizabeth 2 in Dubai on Saturday 21 March 2009, after the latter ship\\'s retirement, while both ships were berthed at Port Rashid. With the withdrawal of Queen Elizabeth 2 from Cunard\\'s fleet and its docking in Dubai, Queen Mary 2 became the only ocean liner left in active passenger service.'"
]
},
"execution_count": 76,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].page_content"
]
},
{
"cell_type": "code",
"execution_count": 77,
"id": "2f7f7832-fe4d-4a58-892d-bb987cdbed0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'id': '5116f7cccdbf614d60bcd23498274ffd7b1e4ec7',\n",
" 'title': 'RMS Queen Mary 2',\n",
" 'question': 'What year did Queen Mary 2 complete her journey around South America?',\n",
" 'answer': '2006'}"
]
},
"execution_count": 77,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].metadata"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "125d073c-4f4f-4ae6-a0c7-9e9db3cc8d69",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -18,8 +18,7 @@
"outputs": [],
"source": [
"# # Install package\n",
"!pip install \"unstructured[local-inference]\"\n",
"!pip install layoutparser[layoutmodels,tesseract]"
"!pip install \"unstructured[all-docs]\"\n"
]
},
{

View File

@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "e48afb8d",
"metadata": {},
@@ -11,7 +12,8 @@
"\n",
"Below we show how to easily go from a YouTube url to text to chat!\n",
"\n",
"We wil use the `OpenAIWhisperParser`, which will use the OpenAI Whisper API to transcribe audio to text.\n",
"We wil use the `OpenAIWhisperParser`, which will use the OpenAI Whisper API to transcribe audio to text, \n",
"and the `OpenAIWhisperParserLocal` for local support and running on private clouds or on premise.\n",
"\n",
"Note: You will need to have an `OPENAI_API_KEY` supplied."
]
@@ -24,7 +26,7 @@
"outputs": [],
"source": [
"from langchain.document_loaders.generic import GenericLoader\n",
"from langchain.document_loaders.parsers import OpenAIWhisperParser\n",
"from langchain.document_loaders.parsers import OpenAIWhisperParser, OpenAIWhisperParserLocal\n",
"from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader"
]
},
@@ -46,7 +48,8 @@
"outputs": [],
"source": [
"! pip install yt_dlp\n",
"! pip install pydub"
"! pip install pydub\n",
"! pip install librosa"
]
},
{
@@ -63,6 +66,18 @@
"Let's take the first lecture of Andrej Karpathy's YouTube course as an example! "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8682f256",
"metadata": {},
"outputs": [],
"source": [
"# set a flag to switch between local and remote parsing\n",
"# change this to True if you want to use local parsing\n",
"local = False"
]
},
{
"cell_type": "code",
"execution_count": 2,
@@ -102,7 +117,10 @@
"save_dir = \"~/Downloads/YouTube\"\n",
"\n",
"# Transcribe the videos to text\n",
"loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())\n",
"if local:\n",
" loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParserLocal())\n",
"else:\n",
" loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())\n",
"docs = loader.load()"
]
},
@@ -275,7 +293,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -289,7 +307,12 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.10.11"
},
"vscode": {
"interpreter": {
"hash": "97cc609b13305c559618ec78a438abc56230b9381f827f22d070313b9a1f3777"
}
}
},
"nbformat": 4,

View File

@@ -0,0 +1,95 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2ed9a4c2",
"metadata": {},
"source": [
"# Beautiful Soup\n",
"\n",
"Beautiful Soup offers fine-grained control over HTML content, enabling specific tag extraction, removal, and content cleaning. \n",
"\n",
"It's suited for cases where you want to extract specific information and clean up the HTML content according to your needs.\n",
"\n",
"For example, we can scrape text content within `<p>, <li>, <div>, and <a>` tags from the HTML content:\n",
"\n",
"* `<p>`: The paragraph tag. It defines a paragraph in HTML and is used to group together related sentences and/or phrases.\n",
" \n",
"* `<li>`: The list item tag. It is used within ordered (`<ol>`) and unordered (`<ul>`) lists to define individual items within the list.\n",
" \n",
"* `<div>`: The division tag. It is a block-level element used to group other inline or block-level elements.\n",
" \n",
"* `<a>`: The anchor tag. It is used to define hyperlinks."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dd710e5b",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import AsyncChromiumLoader\n",
"from langchain.document_transformers import BeautifulSoupTransformer\n",
"\n",
"# Load HTML\n",
"loader = AsyncChromiumLoader([\"https://www.wsj.com\"])\n",
"html = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "052b64dd",
"metadata": {},
"outputs": [],
"source": [
"# Transform\n",
"bs_transformer = BeautifulSoupTransformer()\n",
"docs_transformed = bs_transformer.transform_documents(html,tags_to_extract=[\"p\", \"li\", \"div\", \"a\"])"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "b53a5307",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Conservative legal activists are challenging Amazon, Comcast and others using many of the same tools that helped kill affirmative-action programs in colleges.1,2099 min read U.S. stock indexes fell and government-bond prices climbed, after Moodys lowered credit ratings for 10 smaller U.S. banks and said it was reviewing ratings for six larger ones. The Dow industrials dropped more than 150 points.3 min read Penn Entertainments Barstool Sportsbook app will be rebranded as ESPN Bet this fall as '"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs_transformed[0].page_content[0:500]"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,103 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Nuclia Understanding API document transformer\n",
"\n",
"[Nuclia](https://nuclia.com) automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing.\n",
"\n",
"The Nuclia Understanding API document transformer splits text into paragraphs and sentences, identifies entities, provides a summary of the text and generates embeddings for all the sentences.\n",
"\n",
"To use the Nuclia Understanding API, you need to have a Nuclia account. You can create one for free at [https://nuclia.cloud](https://nuclia.cloud), and then [create a NUA key](https://docs.nuclia.dev/docs/docs/using/understanding/intro).\n",
"\n",
"from langchain.document_transformers.nuclia_text_transform import NucliaTextTransformer"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#!pip install --upgrade protobuf\n",
"#!pip install nucliadb-protos"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"NUCLIA_ZONE\"] = \"<YOUR_ZONE>\" # e.g. europe-1\n",
"os.environ[\"NUCLIA_NUA_KEY\"] = \"<YOUR_API_KEY>\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"To use the Nuclia document transformer, you need to instantiate a `NucliaUnderstandingAPI` tool with `enable_ml` set to `True`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools.nuclia import NucliaUnderstandingAPI\n",
"\n",
"nua = NucliaUnderstandingAPI(enable_ml=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The Nuclia document transformer must be called in async mode, so you need to use the `atransform_documents` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"\n",
"from langchain.document_transformers.nuclia_text_transform import NucliaTextTransformer\n",
"from langchain.schema.document import Document\n",
"\n",
"\n",
"async def process():\n",
" documents = [\n",
" Document(page_content=\"<TEXT 1>\", metadata={}),\n",
" Document(page_content=\"<TEXT 2>\", metadata={}),\n",
" Document(page_content=\"<TEXT 3>\", metadata={}),\n",
" ]\n",
" nuclia_transformer = NucliaTextTransformer(nua)\n",
" transformed_documents = await nuclia_transformer.atransform_documents(documents)\n",
" print(transformed_documents)\n",
"\n",
"\n",
"asyncio.run(process())"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,242 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "cc6caafa",
"metadata": {},
"source": [
"# Fireworks\n",
"\n",
">[Fireworks](https://app.fireworks.ai/) accelerates product development on generative AI by creating an innovative AI experiment and production platform. \n",
"\n",
"This example goes over how to use LangChain to interact with `Fireworks` models."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "60b6dbb2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms.fireworks import Fireworks, FireworksChat\n",
"from langchain import PromptTemplate, LLMChain\n",
"from langchain.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
")\n",
"import os"
]
},
{
"cell_type": "markdown",
"id": "ccff689e",
"metadata": {},
"source": [
"# Setup\n",
"\n",
"Contact Fireworks AI for the an API Key to access our models\n",
"\n",
"Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-7b-chat."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9ca87a2e",
"metadata": {},
"outputs": [],
"source": [
"# Initialize a Fireworks LLM\n",
"os.environ['FIREWORKS_API_KEY'] = \"<YOUR_API_KEY>\" # Change this to your own API key\n",
"llm = Fireworks(model_id=\"accounts/fireworks/models/llama-v2-13b-chat\")"
]
},
{
"cell_type": "markdown",
"id": "acc24d0c",
"metadata": {},
"source": [
"# Calling the Model\n",
"\n",
"You can use the LLMs to call the model for specified prompt(s). \n",
"\n",
"Currently supported models: \n",
"\n",
"* Falcon\n",
" * `accounts/fireworks/models/falcon-7b`\n",
" * `accounts/fireworks/models/falcon-40b-w8a16`\n",
"* Llama 2\n",
" * `accounts/fireworks/models/llama-v2-7b`\n",
" * `accounts/fireworks/models/llama-v2-7b-w8a16`\n",
" * `accounts/fireworks/models/llama-v2-7b-chat`\n",
" * `accounts/fireworks/models/llama-v2-7b-chat-w8a16`\n",
" * `accounts/fireworks/models/llama-v2-13b`\n",
" * `accounts/fireworks/models/llama-v2-13b-w8a16`\n",
" * `accounts/fireworks/models/llama-v2-13b-chat`\n",
" * `accounts/fireworks/models/llama-v2-13b-chat-w8a16`\n",
" * `accounts/fireworks/models/llama-v2-70b-chat-4gpu`\n",
"* StarCoder\n",
" * `accounts/fireworks/models/starcoder-1b-w8a16-1gpu`\n",
" * `accounts/fireworks/models/starcoder-3b-w8a16-1gpu`\n",
" * `accounts/fireworks/models/starcoder-7b-w8a16-1gpu`\n",
" * `accounts/fireworks/models/starcoder-16b-w8a16`\n",
"\n",
"See the full, most up-to-date list on [app.fireworks.ai](https://app.fireworks.ai)."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "bf0a425c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Is it Tom Brady, Aaron Rodgers, or someone else? It's a tough question to answer, and there are strong arguments for each of these quarterbacks. Here are some of the reasons why each of these quarterbacks could be considered the best:\n",
"\n",
"Tom Brady:\n",
"\n",
"* He has the most Super Bowl wins (6) of any quarterback in NFL history.\n",
"* He has been named Super Bowl MVP four times, more than any other player.\n",
"* He has led the New England Patriots to 18 playoff victories, the most in NFL history.\n",
"* He has thrown for over 70,000 yards in his career, the most of any quarterback in NFL history.\n",
"* He has thrown for 50 or more touchdowns in a season four times, the most of any quarterback in NFL history.\n",
"\n",
"Aaron Rodgers:\n",
"\n",
"* He has led the Green Bay Packers to a Super Bowl victory in 2010.\n",
"* He has been named Super Bowl MVP once.\n",
"* He has thrown for over 40,000 yards in his career, the most of any quarterback in NFL history.\n",
"* He has thrown for 40 or more touchdowns in a season three times, the most of any quarterback in NFL history.\n",
"* He has a career passer rating of 103.1, the highest of any quarterback in NFL history.\n",
"\n",
"So, who's the best quarterback in the NFL? It's a tough call, but here's my opinion:\n",
"\n",
"I think Aaron Rodgers is the best quarterback in the NFL right now. He has led the Packers to a Super Bowl victory and has had some incredible seasons, including the 2011 season when he threw for 45 touchdowns and just 6 interceptions. He has a strong arm, great accuracy, and is incredibly mobile for a quarterback of his size. He also has a great sense of timing and knows when to take risks and when to play it safe.\n",
"\n",
"Tom Brady is a close second, though. He has an incredible track record of success, including six Super Bowl victories, and has been one of the most consistent quarterbacks in the league for the past two decades. He has a strong arm and is incredibly accurate\n"
]
}
],
"source": [
"# Single prompt\n",
"output = llm(\"Who's the best quarterback in the NFL?\")\n",
"print(output)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "afc7de6f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[Generation(text='\\nThe best cricket player in 2016 is a matter of opinion, but some of the top contenders for the title include:\\n\\n1. Virat Kohli (India): Kohli had a phenomenal year in 2016, scoring over 1,000 runs in Test cricket, including four centuries, and averaging over 70. He also scored heavily in ODI cricket, with an average of over 80.\\n2. Steve Smith (Australia): Smith had a remarkable year in 2016, leading Australia to a Test series victory in India and scoring over 1,000 runs in the format, including five centuries. He also averaged over 60 in ODI cricket.\\n3. KL Rahul (India): Rahul had a breakout year in 2016, scoring over 1,000 runs in Test cricket, including four centuries, and averaging over 60. He also scored heavily in ODI cricket, with an average of over 70.\\n4. Joe Root (England): Root had a solid year in 2016, scoring over 1,000 runs in Test cricket, including four centuries, and averaging over 50. He also scored heavily in ODI cricket, with an average of over 80.\\n5. Quinton de Kock (South Africa): De Kock had a remarkable year in 2016, scoring over 1,000 runs in ODI cricket, including six centuries, and averaging over 80. He also scored heavily in Test cricket, with an average of over 50.\\n\\nThese are just a few of the top contenders for the title of best cricket player in 2016, but there were many other talented players who also had impressive years. Ultimately, the answer to this question is subjective and depends on individual opinions and criteria for evaluation.', generation_info=None)], [Generation(text=\"\\nThis is a tough one, as there are so many great players in the league right now. But if I had to choose one, I'd say LeBron James is the best basketball player in the league. He's a once-in-a-generation talent who can dominate the game in so many ways. He's got incredible speed, strength, and court vision, and he's always finding new ways to improve his game. Plus, he's been doing it at an elite level for over a decade now, which is just amazing.\\n\\nBut don't just take my word for it - there are plenty of other great players in the league who could make a strong case for being the best. Guys like Kevin Durant, Steph Curry, James Harden, and Giannis Antetokounmpo are all having incredible seasons, and they've all got their own unique skills and strengths that make them special. So ultimately, it's up to you to decide who you think is the best basketball player in the league.\", generation_info=None)]]\n"
]
}
],
"source": [
"# Calling multiple prompts\n",
"output = llm.generate([\n",
" \"Who's the best cricket player in 2016?\",\n",
" \"Who's the best basketball player in the league?\",\n",
"])\n",
"print(output.generations)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "b801c20d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Kansas City in December is quite cold, with temperatures typically r\n"
]
}
],
"source": [
"# Setting additional parameters: temperature, max_tokens, top_p\n",
"llm = Fireworks(model_id=\"accounts/fireworks/models/llama-v2-13b-chat\", temperature=0.7, max_tokens=15, top_p=1.0)\n",
"print(llm(\"What's the weather like in Kansas City in December?\"))"
]
},
{
"cell_type": "markdown",
"id": "137662a6",
"metadata": {},
"source": [
"# Create and Run Chain\n",
"\n",
"Create a prompt template to be used with the LLM Chain. Once this prompt template is created, initialize the chain with the LLM and prompt template, and run the chain with the specified prompts."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "fd2c6bc1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Naming a company can be a fun and creative process! Here are a few name ideas for a company that makes football helmets:\n",
"\n",
"1. Helix Headgear: This name plays off the idea of the helix shape of a football helmet and could be a memorable and catchy name for a company.\n",
"2. Gridiron Gear: \"Gridiron\" is a term used to describe a football field, and \"gear\" refers to the products the company sells. This name is straightforward and easy to understand.\n",
"3. Cushion Crusaders: This name emphasizes the protective qualities of football helmets and could appeal to customers looking for safety-conscious products.\n",
"4. Helmet Heroes: This name has a fun, heroic tone and could appeal to customers looking for high-quality products.\n",
"5. Tackle Tech: \"Tackle\" is a term used in football to describe a player's attempt to stop an opponent, and \"tech\" refers to the technology used in the helmets. This name could appeal to customers interested in innovative products.\n",
"6. Padded Protection: This name emphasizes the protective qualities of football helmets and could appeal to customers looking for products that prioritize safety.\n",
"7. Gridiron Gear Co.: This name is simple and straightforward, and it clearly conveys the company's focus on football-related products.\n",
"8. Helmet Haven: This name has a soothing, protective tone and could appeal to customers looking for a reliable brand.\n",
"\n",
"Remember to choose a name that reflects your company's values and mission, and that resonates with your target market. Good luck with your company!\n"
]
}
],
"source": [
"human_message_prompt = HumanMessagePromptTemplate.from_template(\"What is a good name for a company that makes {product}?\")\n",
"chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])\n",
"chat = FireworksChat()\n",
"chain = LLMChain(llm=chat, prompt=chat_prompt_template)\n",
"output = chain.run(\"football helmets\")\n",
"\n",
"print(output)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -31,12 +31,11 @@
},
"outputs": [],
"source": [
"from langchain.llms.bedrock import Bedrock\n",
"from langchain.llms import Bedrock\n",
"\n",
"llm = Bedrock(\n",
" credentials_profile_name=\"bedrock-admin\",\n",
" model_id=\"amazon.titan-tg1-large\",\n",
" endpoint_url=\"custom_endpoint_url\",\n",
" model_id=\"amazon.titan-tg1-large\"\n",
")"
]
},

File diff suppressed because one or more lines are too long

View File

@@ -4,12 +4,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Llama-cpp\n",
"# Llama.cpp\n",
"\n",
"[llama-cpp](https://github.com/abetlen/llama-cpp-python) is a Python binding for [llama.cpp](https://github.com/ggerganov/llama.cpp). \n",
"[llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is a Python binding for [llama.cpp](https://github.com/ggerganov/llama.cpp). \n",
"It supports [several LLMs](https://github.com/ggerganov/llama.cpp).\n",
"\n",
"This notebook goes over how to run `llama-cpp` within LangChain."
"This notebook goes over how to run `llama-cpp-python` within LangChain."
]
},
{
@@ -18,7 +18,7 @@
"source": [
"## Installation\n",
"\n",
"There is a bunch of options how to install the llama-cpp package: \n",
"There are different options on how to install the llama-cpp package: \n",
"- only CPU usage\n",
"- CPU + GPU (using one of many BLAS backends)\n",
"- Metal GPU (MacOS with Apple Silicon Chip) \n",
@@ -61,7 +61,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**IMPORTANT**: If you have already installed a cpu only version of the package, you need to reinstall it from scratch: consider the following command: "
"**IMPORTANT**: If you have already installed the CPU only version of the package, you need to reinstall it from scratch. Consider the following command: "
]
},
{
@@ -79,7 +79,7 @@
"source": [
"### Installation with Metal\n",
"\n",
"`lama.cpp` supports Apple silicon first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks. Use the `FORCE_CMAKE=1` environment variable to force the use of cmake and install the pip package for the Metal support ([source](https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md)).\n",
"`llama.cpp` supports Apple silicon first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks. Use the `FORCE_CMAKE=1` environment variable to force the use of cmake and install the pip package for the Metal support ([source](https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md)).\n",
"\n",
"Example installation with Metal Support:"
]
@@ -143,7 +143,7 @@
"\n",
"#### Compiling and installing\n",
"\n",
"In the same command prompt (anaconda prompt) you set the variables, you can cd into `llama-cpp-python` directory and run the following commands.\n",
"In the same command prompt (anaconda prompt) you set the variables, you can `cd` into `llama-cpp-python` directory and run the following commands.\n",
"\n",
"```\n",
"python setup.py clean\n",
@@ -164,7 +164,9 @@
"source": [
"Make sure you are following all instructions to [install all necessary model files](https://github.com/ggerganov/llama.cpp).\n",
"\n",
"You don't need an `API_TOKEN`!"
"You don't need an `API_TOKEN` as you will run the LLM locally.\n",
"\n",
"It is worth understanding which models are suitable to be used on the desired machine."
]
},
{
@@ -227,7 +229,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"`Llama-v2`"
"Example using a LLaMA 2 7B model"
]
},
{
@@ -304,7 +306,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"`Llama-v1`"
"Example using a LLaMA v1 model"
]
},
{
@@ -381,7 +383,7 @@
"source": [
"### GPU\n",
"\n",
"If the installation with BLAS backend was correct, you will see an `BLAS = 1` indicator in model properties.\n",
"If the installation with BLAS backend was correct, you will see a `BLAS = 1` indicator in model properties.\n",
"\n",
"Two of the most important parameters for use with GPU are:\n",
"\n",
@@ -473,22 +475,15 @@
"llm_chain.run(question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Metal\n",
"\n",
"If the installation with Metal was correct, you will see an `NEON = 1` indicator in model properties.\n",
"If the installation with Metal was correct, you will see a `NEON = 1` indicator in model properties.\n",
"\n",
"Two of the most important parameters for use with GPU are:\n",
"Two of the most important GPU parameters are:\n",
"\n",
"- `n_gpu_layers` - determines how many layers of the model are offloaded to your Metal GPU, in the most case, set it to `1` is enough for Metal\n",
"- `n_batch` - how many tokens are processed in parallel, default is 8, set to bigger number.\n",
@@ -522,7 +517,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The rest are almost same as GPU, the console log will show the following log to indicate the Metal was enable properly.\n",
"The console log will show the following log to indicate Metal was enable properly.\n",
"\n",
"```\n",
"ggml_metal_init: allocating\n",
@@ -530,7 +525,9 @@
"...\n",
"```\n",
"\n",
"You also could check the `Activity Monitor` by watching the % GPU of the process, the % CPU will drop dramatically after turn on `n_gpu_layers=1`. Also for the first time call LLM, the performance might be slow due to the model compilation in Metal GPU."
"You also could check `Activity Monitor` by watching the GPU usage of the process, the CPU usage will drop dramatically after turn on `n_gpu_layers=1`. \n",
"\n",
"For the first call to the LLM, the performance may be slow due to the model compilation in Metal GPU."
]
}
],

View File

@@ -0,0 +1,340 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Ollama\n",
"\n",
"[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.\n",
"\n",
"Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. \n",
"\n",
"It optimizes setup and configuration details, including GPU usage.\n",
"\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://github.com/jmorganca/ollama#model-library).\n",
"\n",
"## Setup\n",
"\n",
"First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:\n",
"\n",
"* [Download](https://ollama.ai/download)\n",
"* Fetch a model, e.g., `Llama-7b`: `ollama pull llama2`\n",
"* Run `ollama run llama2`\n",
"\n",
"\n",
"## Usage\n",
"\n",
"You can see a full list of supported parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html)."
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import Ollama\n",
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler \n",
"llm = Ollama(base_url=\"http://localhost:11434\", \n",
" model=\"llama2\", \n",
" callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With `StreamingStdOutCallbackHandler`, you will see tokens streamed."
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Great! The history of Artificial Intelligence (AI) is a fascinating and complex topic that spans several decades. Here's a brief overview:\n",
"\n",
"1. Early Years (1950s-1960s): The term \"Artificial Intelligence\" was coined in 1956 by computer scientist John McCarthy. However, the concept of AI dates back to ancient Greece, where mythical creatures like Talos and Hephaestus were created to perform tasks without any human intervention. In the 1950s and 1960s, researchers began exploring ways to replicate human intelligence using computers, leading to the development of simple AI programs like ELIZA (1966) and PARRY (1972).\n",
"2. Rule-Based Systems (1970s-1980s): As computing power increased, researchers developed rule-based systems, such as Mycin (1976), which could diagnose medical conditions based on a set of rules. This period also saw the rise of expert systems, like EDICT (1985), which mimicked human experts in specific domains.\n",
"3. Machine Learning (1990s-2000s): With the advent of big data and machine learning algorithms, AI evolved to include neural networks, decision trees, and other techniques for training models on large datasets. This led to the development of applications like speech recognition (e.g., Siri, Alexa), image recognition (e.g., Google Image Search), and natural language processing (e.g., chatbots).\n",
"4. Deep Learning (2010s-present): The rise of deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), has enabled AI to perform complex tasks like image and speech recognition, natural language processing, and even autonomous driving. Companies like Google, Facebook, and Baidu have invested heavily in deep learning research, leading to breakthroughs in areas like facial recognition, object detection, and machine translation.\n",
"5. Current Trends (present-future): AI is currently being applied to various industries, including healthcare, finance, education, and entertainment. With the growth of cloud computing, edge AI, and autonomous systems, we can expect to see more sophisticated AI applications in the near future. However, there are also concerns about the ethical implications of AI, such as data privacy, algorithmic bias, and job displacement.\n",
"\n",
"Remember, AI has a long history, and its development is an ongoing process. As technology advances, we can expect to see even more innovative applications of AI in various fields."
]
},
{
"data": {
"text/plain": [
"'\\nGreat! The history of Artificial Intelligence (AI) is a fascinating and complex topic that spans several decades. Here\\'s a brief overview:\\n\\n1. Early Years (1950s-1960s): The term \"Artificial Intelligence\" was coined in 1956 by computer scientist John McCarthy. However, the concept of AI dates back to ancient Greece, where mythical creatures like Talos and Hephaestus were created to perform tasks without any human intervention. In the 1950s and 1960s, researchers began exploring ways to replicate human intelligence using computers, leading to the development of simple AI programs like ELIZA (1966) and PARRY (1972).\\n2. Rule-Based Systems (1970s-1980s): As computing power increased, researchers developed rule-based systems, such as Mycin (1976), which could diagnose medical conditions based on a set of rules. This period also saw the rise of expert systems, like EDICT (1985), which mimicked human experts in specific domains.\\n3. Machine Learning (1990s-2000s): With the advent of big data and machine learning algorithms, AI evolved to include neural networks, decision trees, and other techniques for training models on large datasets. This led to the development of applications like speech recognition (e.g., Siri, Alexa), image recognition (e.g., Google Image Search), and natural language processing (e.g., chatbots).\\n4. Deep Learning (2010s-present): The rise of deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), has enabled AI to perform complex tasks like image and speech recognition, natural language processing, and even autonomous driving. Companies like Google, Facebook, and Baidu have invested heavily in deep learning research, leading to breakthroughs in areas like facial recognition, object detection, and machine translation.\\n5. Current Trends (present-future): AI is currently being applied to various industries, including healthcare, finance, education, and entertainment. With the growth of cloud computing, edge AI, and autonomous systems, we can expect to see more sophisticated AI applications in the near future. However, there are also concerns about the ethical implications of AI, such as data privacy, algorithmic bias, and job displacement.\\n\\nRemember, AI has a long history, and its development is an ongoing process. As technology advances, we can expect to see even more innovative applications of AI in various fields.'"
]
},
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm(\"Tell me about the history of AI\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## RAG\n",
"\n",
"We can use Olama with RAG, [just as shown here](https://python.langchain.com/docs/use_cases/question_answering/how_to/local_retrieval_qa).\n",
"\n",
"Let's use the 13b model:\n",
"\n",
"```\n",
"ollama pull llama2:13b\n",
"ollama run llama2:13b \n",
"```\n",
"\n",
"Let's also use local embeddings from `GPT4AllEmbeddings` and `Chroma`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! pip install gpt4all chromadb"
]
},
{
"cell_type": "code",
"execution_count": 60,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import WebBaseLoader\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"data = loader.load()\n",
"\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)"
]
},
{
"cell_type": "code",
"execution_count": 61,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found model file at /Users/rlm/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin\n"
]
}
],
"source": [
"from langchain.vectorstores import Chroma\n",
"from langchain.embeddings import GPT4AllEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings())"
]
},
{
"cell_type": "code",
"execution_count": 62,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"4"
]
},
"execution_count": 62,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"question = \"What are the approaches to Task Decomposition?\"\n",
"docs = vectorstore.similarity_search(question)\n",
"len(docs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain import PromptTemplate\n",
"\n",
"# Prompt\n",
"template = \"\"\"Use the following pieces of context to answer the question at the end. \n",
"If you don't know the answer, just say that you don't know, don't try to make up an answer. \n",
"Use three sentences maximum and keep the answer as concise as possible. \n",
"{context}\n",
"Question: {question}\n",
"Helpful Answer:\"\"\"\n",
"QA_CHAIN_PROMPT = PromptTemplate(\n",
" input_variables=[\"context\", \"question\"],\n",
" template=template,\n",
")\n"
]
},
{
"cell_type": "code",
"execution_count": 69,
"metadata": {},
"outputs": [],
"source": [
"# LLM\n",
"from langchain.llms import Ollama\n",
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"llm = Ollama(base_url=\"http://localhost:11434\",\n",
" model=\"llama2\",\n",
" verbose=True,\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))"
]
},
{
"cell_type": "code",
"execution_count": 66,
"metadata": {},
"outputs": [],
"source": [
"# QA chain\n",
"from langchain.chains import RetrievalQA\n",
"qa_chain = RetrievalQA.from_chain_type(\n",
" llm,\n",
" retriever=vectorstore.as_retriever(),\n",
" chain_type_kwargs={\"prompt\": QA_CHAIN_PROMPT},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Task decomposition can be approached in different ways for AI agents, including:\n",
"\n",
"1. Using simple prompts like \"Steps for XYZ.\" or \"What are the subgoals for achieving XYZ?\" to guide the LLM.\n",
"2. Providing task-specific instructions, such as \"Write a story outline\" for writing a novel.\n",
"3. Utilizing human inputs to help the AI agent understand the task and break it down into smaller steps."
]
}
],
"source": [
"question = \"What are the various approaches to Task Decomposition for AI Agents?\"\n",
"result = qa_chain({\"query\": question})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also get logging for tokens."
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Task decomposition can be approached in three ways: (1) using simple prompting like \"Steps for XYZ.\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions, or (3) with human inputs.{'model': 'llama2', 'created_at': '2023-08-08T04:01:09.005367Z', 'done': True, 'context': [1, 29871, 1, 13, 9314, 14816, 29903, 6778, 13, 13, 3492, 526, 263, 8444, 29892, 3390, 1319, 322, 15993, 20255, 29889, 29849, 1234, 408, 1371, 3730, 408, 1950, 29892, 1550, 1641, 9109, 29889, 3575, 6089, 881, 451, 3160, 738, 10311, 1319, 29892, 443, 621, 936, 29892, 11021, 391, 29892, 7916, 391, 29892, 304, 27375, 29892, 18215, 29892, 470, 27302, 2793, 29889, 3529, 9801, 393, 596, 20890, 526, 5374, 635, 443, 5365, 1463, 322, 6374, 297, 5469, 29889, 13, 13, 3644, 263, 1139, 947, 451, 1207, 738, 4060, 29892, 470, 338, 451, 2114, 1474, 16165, 261, 296, 29892, 5649, 2020, 2012, 310, 22862, 1554, 451, 1959, 29889, 960, 366, 1016, 29915, 29873, 1073, 278, 1234, 304, 263, 1139, 29892, 3113, 1016, 29915, 29873, 6232, 2089, 2472, 29889, 13, 13, 29966, 829, 14816, 29903, 6778, 13, 13, 29961, 25580, 29962, 4803, 278, 1494, 12785, 310, 3030, 304, 1234, 278, 1139, 472, 278, 1095, 29889, 29871, 13, 3644, 366, 1016, 29915, 29873, 1073, 278, 1234, 29892, 925, 1827, 393, 366, 1016, 29915, 29873, 1073, 29892, 1016, 29915, 29873, 1018, 304, 1207, 701, 385, 1234, 29889, 29871, 13, 11403, 2211, 25260, 7472, 322, 3013, 278, 1234, 408, 3022, 895, 408, 1950, 29889, 29871, 13, 5398, 26227, 508, 367, 2309, 313, 29896, 29897, 491, 365, 26369, 411, 2560, 9508, 292, 763, 376, 7789, 567, 363, 1060, 29979, 29999, 7790, 29876, 29896, 19602, 376, 5618, 526, 278, 1014, 1484, 1338, 363, 3657, 15387, 1060, 29979, 29999, 29973, 613, 313, 29906, 29897, 491, 773, 3414, 29899, 14940, 11994, 29936, 321, 29889, 29887, 29889, 376, 6113, 263, 5828, 27887, 1213, 363, 5007, 263, 9554, 29892, 470, 313, 29941, 29897, 411, 5199, 10970, 29889, 13, 13, 5398, 26227, 508, 367, 2309, 313, 29896, 29897, 491, 365, 26369, 411, 2560, 9508, 292, 763, 376, 7789, 567, 363, 1060, 29979, 29999, 7790, 29876, 29896, 19602, 376, 5618, 526, 278, 1014, 1484, 1338, 363, 3657, 15387, 1060, 29979, 29999, 29973, 613, 313, 29906, 29897, 491, 773, 3414, 29899, 14940, 11994, 29936, 321, 29889, 29887, 29889, 376, 6113, 263, 5828, 27887, 1213, 363, 5007, 263, 9554, 29892, 470, 313, 29941, 29897, 411, 5199, 10970, 29889, 13, 13, 5398, 26227, 508, 367, 2309, 313, 29896, 29897, 491, 365, 26369, 411, 2560, 9508, 292, 763, 376, 7789, 567, 363, 1060, 29979, 29999, 7790, 29876, 29896, 19602, 376, 5618, 526, 278, 1014, 1484, 1338, 363, 3657, 15387, 1060, 29979, 29999, 29973, 613, 313, 29906, 29897, 491, 773, 3414, 29899, 14940, 11994, 29936, 321, 29889, 29887, 29889, 376, 6113, 263, 5828, 27887, 1213, 363, 5007, 263, 9554, 29892, 470, 313, 29941, 29897, 411, 5199, 10970, 29889, 13, 13, 1451, 16047, 267, 297, 1472, 29899, 8489, 18987, 322, 3414, 26227, 29901, 1858, 9450, 975, 263, 3309, 29891, 4955, 322, 17583, 3902, 8253, 278, 1650, 2913, 3933, 18066, 292, 29889, 365, 26369, 29879, 21117, 304, 10365, 13900, 746, 20050, 411, 15668, 4436, 29892, 3907, 963, 3109, 16424, 9401, 304, 25618, 1058, 5110, 515, 14260, 322, 1059, 29889, 13, 16492, 29901, 1724, 526, 278, 13501, 304, 9330, 897, 510, 3283, 29973, 13, 29648, 1319, 673, 29901, 518, 29914, 25580, 29962, 13, 5398, 26227, 508, 367, 26733, 297, 2211, 5837, 29901, 313, 29896, 29897, 773, 2560, 9508, 292, 763, 376, 7789, 567, 363, 1060, 29979, 29999, 7790, 29876, 29896, 19602, 376, 5618, 526, 278, 1014, 1484, 1338, 363, 
3657, 15387, 1060, 29979, 29999, 29973, 613, 313, 29906, 29897, 491, 773, 3414, 29899, 14940, 11994, 29892, 470, 313, 29941, 29897, 411, 5199, 10970, 29889, 2], 'total_duration': 1364428708, 'load_duration': 1246375, 'sample_count': 62, 'sample_duration': 44859000, 'prompt_eval_count': 1, 'eval_count': 62, 'eval_duration': 1313002000}\n"
]
}
],
"source": [
"from langchain.schema import LLMResult\n",
"from langchain.callbacks.base import BaseCallbackHandler\n",
"\n",
"class GenerationStatisticsCallback(BaseCallbackHandler):\n",
" def on_llm_end(self, response: LLMResult, **kwargs) -> None:\n",
" print(response.generations[0][0].generation_info)\n",
" \n",
"callback_manager = CallbackManager([StreamingStdOutCallbackHandler(), GenerationStatisticsCallback()])\n",
"\n",
"llm = Ollama(base_url=\"http://localhost:11434\",\n",
" model=\"llama2\",\n",
" verbose=True,\n",
" callback_manager=callback_manager)\n",
"\n",
"qa_chain = RetrievalQA.from_chain_type(\n",
" llm,\n",
" retriever=vectorstore.as_retriever(),\n",
" chain_type_kwargs={\"prompt\": QA_CHAIN_PROMPT},\n",
")\n",
"\n",
"question = \"What are the approaches to Task Decomposition?\"\n",
"result = qa_chain({\"query\": question})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`eval_count` / (`eval_duration`/10e9) gets `tok / s`"
]
},
{
"cell_type": "code",
"execution_count": 57,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"47.22003469910937"
]
},
"execution_count": 57,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"62 / (1313002000/1000/1000/1000)"
]
}
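,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (assuming the `generation_info` keys shown in the output above, with `eval_duration` in nanoseconds), the same calculation can be done inside a callback; `TokensPerSecondCallback` is an illustrative name, not a library class:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import LLMResult\n",
"from langchain.callbacks.base import BaseCallbackHandler\n",
"\n",
"class TokensPerSecondCallback(BaseCallbackHandler):\n",
"    # Illustrative helper: computes tok/s from Ollama's generation_info\n",
"    def on_llm_end(self, response: LLMResult, **kwargs) -> None:\n",
"        info = response.generations[0][0].generation_info\n",
"        if info and info.get(\"eval_duration\"):\n",
"            tok_per_s = info[\"eval_count\"] / (info[\"eval_duration\"] / 1e9)\n",
"            print(f\"{tok_per_s:.2f} tok/s\")"
]
}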
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,106 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "9597802c",
"metadata": {},
"source": [
"# Nebula\n",
"\n",
"[Nebula](https://symbl.ai/nebula/) is a fully-managed Conversation platform, on which you can build, deploy, and manage scalable AI applications.\n",
"\n",
"This example goes over how to use LangChain to interact with the [Nebula platform](https://docs.symbl.ai/docs/nebula-llm-overview). \n",
"\n",
"It will send the requests to Nebula Service endpoint, which concatenates `SYMBLAI_NEBULA_SERVICE_URL` and `SYMBLAI_NEBULA_SERVICE_PATH`, with a token defined in `SYMBLAI_NEBULA_SERVICE_TOKEN`"
]
},
{
"cell_type": "markdown",
"id": "f15ebe0d",
"metadata": {},
"source": [
"### Integrate with a LLMChain"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5472a7cd-af26-48ca-ae9b-5f6ae73c74d2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"NEBULA_SERVICE_URL\"] = NEBULA_SERVICE_URL\n",
"os.environ[\"NEBULA_SERVICE_PATH\"] = NEBULA_SERVICE_PATH\n",
"os.environ[\"NEBULA_SERVICE_API_KEY\"] = NEBULA_SERVICE_API_KEY"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6fb585dd",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.llms import OpenLLM\n",
"\n",
"llm = OpenLLM(\n",
" conversation=\"<Drop your text conversation that you want to ask Nebula to analyze here>\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "035dea0f",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain import PromptTemplate, LLMChain\n",
"\n",
"template = \"Identify the {count} main objectives or goals mentioned in this context concisely in less points. Emphasize on key intents.\"\n",
"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"count\"])\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"generated = llm_chain.run(count=\"five\")\n",
"print(generated)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
},
"vscode": {
"interpreter": {
"hash": "a0a0263b650d907a3bfe41c0f8d6a63a071b884df3cfdc1579f00cdc1aed6b03"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,169 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Titan Takeoff\n",
"\n",
"TitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform. \n",
"\n",
"Our inference server, [Titan Takeoff](https://docs.titanml.co/docs/titan-takeoff/getting-started) enables deployment of LLMs locally on your hardware in a single command. Most generative model architectures are supported, such as Falcon, Llama 2, GPT2, T5 and many more."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"To get started with Iris Takeoff, all you need is to have docker and python installed on your local system. If you wish to use the server with gpu suport, then you will need to install docker with cuda support.\n",
"\n",
"For Mac and Windows users, make sure you have the docker daemon running! You can check this by running docker ps in your terminal. To start the daemon, open the docker desktop app.\n",
"\n",
"Run the following command to install the Iris CLI that will enable you to run the takeoff server:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install titan-iris"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Choose a Model\n",
"Iris Takeoff supports many of the most powerful generative text models, such as Falcon, MPT, and Llama. See the [supported models](https://docs.titanml.co/docs/titan-takeoff/supported-models) for more information. For information about using your own models, see the [custom models](https://docs.titanml.co/docs/titan-takeoff/Advanced/custom-models).\n",
"\n",
"Going forward in this demo we will be using the falcon 7B instruct model. This is a good open source model that is trained to follow instructions, and is small enough to easily inference even on CPUs.\n",
"\n",
"## Taking off\n",
"Models are referred to by their model id on HuggingFace. Takeoff uses port 8000 by default, but can be configured to use another port. There is also support to use a Nvidia GPU by specifing cuda for the device flag.\n",
"\n",
"To start the takeoff server, run:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"iris takeoff --model tiiuae/falcon-7b-instruct --device cpu\n",
"iris takeoff --model tiiuae/falcon-7b-instruct --device cuda # Nvidia GPU required\n",
"iris takeoff --model tiiuae/falcon-7b-instruct --device cpu --port 5000 # run on port 5000 (default: 8000)\n",
"```"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You will then be directed to a login page, where you will need to create an account to proceed.\n",
"After logging in, run the command onscreen to check whether the server is ready. When it is ready, you can start using the Takeoff integration\n",
"\n",
"## Inferencing your model\n",
"To access your LLM, use the TitanTakeoff LLM wrapper:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import TitanTakeoff\n",
"\n",
"llm = TitanTakeoff(\n",
" port=8000,\n",
" generate_max_length=128,\n",
" temperature=1.0\n",
")\n",
"\n",
"prompt = \"What is the largest planet in the solar system?\"\n",
"\n",
"llm(prompt)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"No parameters are needed by default, but a port can be specified and [generation parameters](https://docs.titanml.co/docs/titan-takeoff/Advanced/generation-parameters) can be supplied.\n",
"\n",
"### Streaming\n",
"Streaming is also supported via the streaming flag:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain.callbacks.manager import CallbackManager\n",
"\n",
"llm = TitanTakeoff(port=8000, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), streaming=True)\n",
"\n",
"prompt = \"What is the capital of France?\"\n",
"\n",
"llm(prompt)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Integration with LLMChain"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain import PromptTemplate, LLMChain\n",
"\n",
"llm = TitanTakeoff()\n",
"\n",
"template = \"What is the capital of {country}\"\n",
"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"country\"])\n",
"\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"\n",
"generated = llm_chain.run(country=\"Belgium\")\n",
"print(generated)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,196 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "499c3142-2033-437d-a60a-731988ac6074",
"metadata": {},
"source": [
"# vLLM\n",
"\n",
"[vLLM](https://vllm.readthedocs.io/en/latest/index.html) is a fast and easy-to-use library for LLM inference and serving, offering:\n",
"* State-of-the-art serving throughput \n",
"* Efficient management of attention key and value memory with PagedAttention\n",
"* Continuous batching of incoming requests\n",
"* Optimized CUDA kernels\n",
"\n",
"This notebooks goes over how to use a LLM with langchain and vLLM.\n",
"\n",
"To use, you should have the `vllm` python package installed."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "8a3f2666-5c75-4797-967a-7915a247bf33",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"#!pip install vllm -q"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "84e350f7-21f6-455b-b1f0-8b0116a2fd49",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO 08-06 11:37:33 llm_engine.py:70] Initializing an LLM engine with config: model='mosaicml/mpt-7b', tokenizer='mosaicml/mpt-7b', tokenizer_mode=auto, trust_remote_code=True, dtype=torch.bfloat16, use_dummy_weights=False, download_dir=None, use_np_weights=False, tensor_parallel_size=1, seed=0)\n",
"INFO 08-06 11:37:41 llm_engine.py:196] # GPU blocks: 861, # CPU blocks: 512\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Processed prompts: 100%|██████████| 1/1 [00:00<00:00, 2.00it/s]"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"What is the capital of France ? The capital of France is Paris.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"from langchain.llms import VLLM\n",
"\n",
"llm = VLLM(model=\"mosaicml/mpt-7b\",\n",
" trust_remote_code=True, # mandatory for hf models\n",
" max_new_tokens=128,\n",
" top_k=10,\n",
" top_p=0.95,\n",
" temperature=0.8,\n",
")\n",
"\n",
"print(llm(\"What is the capital of France ?\"))"
]
},
{
"cell_type": "markdown",
"id": "94a3b41d-8329-4f8f-94f9-453d7f132214",
"metadata": {},
"source": [
"## Integrate the model in an LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5605b7a1-fa63-49c1-934d-8b4ef8d71dd5",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Processed prompts: 100%|██████████| 1/1 [00:01<00:00, 1.34s/it]"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"1. The first Pokemon game was released in 1996.\n",
"2. The president was Bill Clinton.\n",
"3. Clinton was president from 1993 to 2001.\n",
"4. The answer is Clinton.\n",
"\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"from langchain import PromptTemplate, LLMChain\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"question = \"Who was the US president in the year the first Pokemon game was released?\"\n",
"\n",
"print(llm_chain.run(question))"
]
},
{
"cell_type": "markdown",
"id": "56826aba-d08b-4838-8bfa-ca96e463b25d",
"metadata": {},
"source": [
"## Distributed Inference\n",
"\n",
"vLLM supports distributed tensor-parallel inference and serving. \n",
"\n",
"To run multi-GPU inference with the LLM class, set the `tensor_parallel_size` argument to the number of GPUs you want to use. For example, to run inference on 4 GPUs"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8c25c35-47b5-459d-9985-3cf546e9ac16",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import VLLM\n",
"\n",
"llm = VLLM(model=\"mosaicml/mpt-30b\",\n",
" tensor_parallel_size=4,\n",
" trust_remote_code=True, # mandatory for hf models\n",
")\n",
"\n",
"llm(\"What is the future of AI?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "conda_pytorch_p310",
"language": "python",
"name": "conda_pytorch_p310"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,67 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Rockset Chat Message History\n",
"\n",
"This notebook goes over how to use [Rockset](https://rockset.com/docs) to store chat message history. \n",
"\n",
"To begin, with get your API key from the [Rockset console](https://console.rockset.com/apikeys). Find your API region for the Rockset [API reference](https://rockset.com/docs/rest-api#introduction)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain.memory.chat_message_histories import RocksetChatMessageHistory\n",
"from rockset import RocksetClient, Regions\n",
"\n",
"history = RocksetChatMessageHistory(\n",
" session_id=\"MySession\",\n",
" client=RocksetClient(\n",
" api_key=\"YOUR API KEY\", \n",
" host=Regions.usw2a1 # us-west-2 Oregon\n",
" ),\n",
" collection=\"langchain_demo\",\n",
" sync=True\n",
")\n",
"history.add_user_message(\"hi!\")\n",
"history.add_ai_message(\"whats up?\")\n",
"print(history.messages)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The output should be something like:\n",
"\n",
"```python\n",
"[\n",
" HumanMessage(content='hi!', additional_kwargs={'id': '2e62f1c2-e9f7-465e-b551-49bae07fe9f0'}, example=False), \n",
" AIMessage(content='whats up?', additional_kwargs={'id': 'b9be8eda-4c18-4cf8-81c3-e91e876927d0'}, example=False)\n",
"]\n",
"\n",
"```"
]
}
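,
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The Rockset-backed history can be plugged into LangChain memory like any other chat message history. A minimal sketch, following the standard `ConversationBufferMemory` pattern:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferMemory\n",
"\n",
"# Reuse the Rockset-backed `history` defined above as the chain's memory store\n",
"memory = ConversationBufferMemory(memory_key=\"history\", chat_memory=history)"
]
}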
],
"metadata": {
"language_info": {
"name": "python"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,154 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "91c6a7ef",
"metadata": {},
"source": [
"# Streamlit Chat Message History\n",
"\n",
"This notebook goes over how to store and use chat message history in a Streamlit app. StreamlitChatMessageHistory will store messages in\n",
"[Streamlit session state](https://docs.streamlit.io/library/api-reference/session-state)\n",
"at the specified `key=`. The default key is `\"langchain_messages\"`.\n",
"\n",
"- Note, StreamlitChatMessageHistory only works when run in a Streamlit app.\n",
"- You may also be interested in [StreamlitCallbackHandler](/docs/integrations/callbacks/streamlit) for LangChain.\n",
"- For more on Streamlit check out their\n",
"[getting started documentation](https://docs.streamlit.io/library/get-started).\n",
"\n",
"You can see the [full app example running here](https://langchain-st-memory.streamlit.app/), and more examples in\n",
"[github.com/langchain-ai/streamlit-agent](https://github.com/langchain-ai/streamlit-agent)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d15e3302",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import StreamlitChatMessageHistory\n",
"\n",
"history = StreamlitChatMessageHistory(key=\"chat_messages\")\n",
"\n",
"history.add_user_message(\"hi!\")\n",
"history.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "64fc465e",
"metadata": {},
"outputs": [],
"source": [
"history.messages"
]
},
{
"cell_type": "markdown",
"id": "b60dc735",
"metadata": {},
"source": [
"You can integrate StreamlitChatMessageHistory into ConversationBufferMemory and chains or agents as usual. The history will be persisted across re-runs of the Streamlit app within a given user session. A given StreamlitChatMessageHistory will NOT be persisted or shared across user sessions."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "42ab5bf3",
"metadata": {},
"outputs": [],
"source": [
"\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.memory.chat_message_histories import StreamlitChatMessageHistory\n",
"\n",
"# Optionally, specify your own session_state key for storing messages\n",
"msgs = StreamlitChatMessageHistory(key=\"special_app_key\")\n",
"\n",
"memory = ConversationBufferMemory(memory_key=\"history\", chat_memory=msgs)\n",
"if len(msgs.messages) == 0:\n",
" msgs.add_ai_message(\"How can I help you?\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a29252de",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"template = \"\"\"You are an AI chatbot having a conversation with a human.\n",
"\n",
"{history}\n",
"Human: {human_input}\n",
"AI: \"\"\"\n",
"prompt = PromptTemplate(input_variables=[\"history\", \"human_input\"], template=template)\n",
"\n",
"# Add the memory to an LLMChain as usual\n",
"llm_chain = LLMChain(llm=OpenAI(), prompt=prompt, memory=memory)"
]
},
{
"cell_type": "markdown",
"id": "7cd99b4b",
"metadata": {},
"source": [
"Conversational Streamlit apps will often re-draw each previous chat message on every re-run. This is easy to do by iterating through `StreamlitChatMessageHistory.messages`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3bdb637b",
"metadata": {},
"outputs": [],
"source": [
"import streamlit as st\n",
"\n",
"for msg in msgs.messages:\n",
" st.chat_message(msg.type).write(msg.content)\n",
"\n",
"if prompt := st.chat_input():\n",
" st.chat_message(\"human\").write(prompt)\n",
"\n",
" # As usual, new messages are added to StreamlitChatMessageHistory when the Chain is called.\n",
" response = llm_chain.run(prompt)\n",
" st.chat_message(\"ai\").write(response)"
]
},
{
"cell_type": "markdown",
"id": "7adaf3d6",
"metadata": {},
"source": [
"**[View the final app](https://langchain-st-memory.streamlit.app/).**"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,21 @@
# BagelDB
> [BagelDB](https://www.bageldb.ai/) (`Open Vector Database for AI`) is like GitHub for AI data.
> It is a collaborative platform where users can create,
> share, and manage vector datasets. It can support private projects for independent developers,
> internal collaborations for enterprises, and public contributions for data DAOs.

## Installation and Setup
```bash
pip install betabageldb
```
## VectorStore
See a [usage example](/docs/integrations/vectorstores/bageldb).
```python
from langchain.vectorstores import Bagel
```
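
As a minimal sketch of querying it (following the standard `VectorStore` interface; the `cluster_name` value is illustrative):

```python
from langchain.vectorstores import Bagel

# Create a cluster from raw texts, then run a similarity search against it
vectorstore = Bagel.from_texts(cluster_name="testing", texts=["hello bagel", "hello langchain"])
docs = vectorstore.similarity_search("bagel")
```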
