Compare commits

...

361 Commits

Author SHA1 Message Date
Erick Friis
1224ac8700 infra: run codespell on v0.1 prs (#21545) 2024-05-10 12:52:14 -07:00
Erick Friis
cfd827b574 docs: announcement bar (#21511) 2024-05-10 10:10:52 -07:00
Erick Friis
a71c4e22d1 docs: 404 page top level (#21510) 2024-05-09 17:14:56 -07:00
Erick Friis
29be604023 Revert "docs: redirect base slug (0.1)" (#21500)
Reverts langchain-ai/langchain#21458
2024-05-09 12:21:25 -07:00
Erick Friis
25f8467ffb docs: redirect base slug (0.1) (#21458) 2024-05-09 10:52:17 -07:00
Erick Friis
0c84aea1d4 docs: spell correction (#21459) 2024-05-09 10:51:04 -07:00
Erick Friis
d0f043a765 docs: baseUrl for ganalytics (0.1) (#21456) 2024-05-08 18:07:10 -07:00
Erick Friis
264f677528 langchain: bump core community (#21453) 2024-05-08 15:58:01 -07:00
Erick Friis
3b99f428a0 community: fix core version str (#21452) 2024-05-08 15:36:12 -07:00
Erick Friis
6eab0a4709 community: release 0.0.38 (#21451) 2024-05-08 15:33:08 -07:00
Erick Friis
be8006fa9c langchain: fix core version dep (#21449) 2024-05-08 15:08:55 -07:00
Erick Friis
e316d30dcb langchain: release 0.1.19 (#21447) 2024-05-08 14:52:03 -07:00
Erick Friis
61b5159ed0 docs: v0.1 build semantics [BEGIN v0.1 BRANCH] (#21444) 2024-05-08 14:29:49 -07:00
Tommi Holmgren
ee35b9ba56 langchain-robocorp: remove toolkit return content max length (#21436)
Robocorp (action server) toolkit had a limitation that the content
length returned by the tool was always cut to max 5000 chars. This was
from the time when context windows were much more limited.

This PR removes the limitation. Whatever the underlying tool provides
gets sent back to the agent.

As the robocorp toolkit no longer restricts the content, the implication
is that either the Action (tool) developer or the agent developer needs
to be aware of potentially oversized tool responses. Our point of view
is this should be the agent developer's responsibility, them being in
control of the use case and aware of the context window the LLM has.
2024-05-08 15:05:55 -04:00
JuHyung Son
710e57d779 upstage: deprecate UPSTAGE_DOCUMENT_AI_API_KEY (#21363)
Description: We are merging UPSTAGE_DOCUMENT_AI_API_KEY and
UPSTAGE_API_KEY into one, and only UPSTAGE_API_KEY will be used going
forward. And we changed the base class of ChatUpstage to BaseChatOpenAI.

---------

Co-authored-by: Sean <chosh0615@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-08 18:02:26 +00:00
Erick Friis
6a295d1ec0 upstage: release 0.1.4 (#21432) 2024-05-08 17:57:40 +00:00
Mateusz Szewczyk
7926cc1929 ibm: Fix llm and embeddings "verify" attribute default value (#21429)
Thank you for contributing to LangChain!

- [x] **PR title**: "langchain-ibm: Fix llm and embeddings 'verify'
attribute default value"


- [x] **PR message**: 
    - **Description:** fix default value of "verify" attribute
    - **Dependencies:** `ibm_watsonx_ai`


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-08 17:23:14 +00:00
Kevin Zhang
0715545378 docs: fix typo in text (#21393)
**Description:** The previous text had an unclosed parenthesis, this fix
adds the closing parenthesis
2024-05-08 15:58:15 +00:00
Dobiichi-Origami
5b00885b49 community: add bind_tools and with_structured_output support to QianfanChatEndpoint (#21412)
…Endpoint`

Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [x] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** add `bind_tools` and `with_structured_output` support
to `QianfanChatEndpoint`


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-05-08 11:35:10 -04:00
Silas Xu
aafaf3e193 The predict_and_parse is deprecated, instead pass an output parser directly to LLMChain. (#20130)
The `predict_and_parse` method is deprecated, instead pass an output
parser directly to LLMChain.

- [x] **PR title**: "langchain: update chain_extract.py"


![image](https://github.com/langchain-ai/langchain/assets/40889019/e950d79f-5a0f-4086-86e9-89f627990fe5)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-08 09:32:17 -04:00
ccurme
3c31bd0ed0 langchain: update use of predict_and_parse in LLMChainFilter (#21389)
Following https://github.com/langchain-ai/langchain/pull/20130

Removes deprecation warnings in docs here:
https://python.langchain.com/docs/modules/data_connection/retrievers/contextual_compression/

Tested using the same docs notebook + existing integration test.
2024-05-08 09:31:33 -04:00
Tomaz Bratanic
dd70f2f473 Update graph docs (#21414)
Update the deprecated docs and added node properties to graph
construction
2024-05-08 09:05:39 -04:00
Erick Friis
bbdf0f8801 experimental[patch]: core and langchain dep (#21402) 2024-05-07 21:39:34 -07:00
Erick Friis
e4aca0d052 experimental[patch]: release 0.0.58 (#21397) 2024-05-08 03:52:44 +00:00
Erick Friis
893f06b5de infra: rewrite ipynb links to md (#21392) 2024-05-07 23:16:52 +00:00
Hassan El Mghari
225ceedcb6 docs: Add together docs in chat models & update provider docs (#21391)
- Added Together docs in chat models section
- Update Together provider docs to match the LLM & chat models sections

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-07 22:40:57 +00:00
Heidi Steen
af97d58c9e docs: update docs/integrations/retrievers/azure_ai_search.ipynb (#21160)
This is a doc update. It fixes up formatting and product name
references. The example code is updated to use a local built-in text
file.

@mmhangami Please take a look

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-05-07 22:33:46 +00:00
snova-jamesv
ca753e7c15 community: updated performance limitation wording in sambanova.ipynb (#21390)
- **Description:** updated performance limitation wording in
sambanova.ipynb
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA
2024-05-07 22:21:46 +00:00
Leonid Ganeline
791d59a2c8 community: callbacks guard_imports (#21173)
Issue: we have several helper functions to import third-party libraries
like import_uptrain in
[community.callbacks](https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.uptrain_callback.import_uptrain.html#langchain_community.callbacks.uptrain_callback.import_uptrain).
And we have core.utils.utils.guard_import that works exactly for this
purpose.
The import_<package> functions work inconsistently and rather be private
functions.
Change: replaced these functions with the guard_import function.

Related to #21133
2024-05-07 15:04:54 -07:00
Hassan El Mghari
416549bed2 docs: Updated Together integration docs (#21388)
**Description:** Updated the together integration docs by leading with
the streaming example, explicitly specifying a model to show users how
to do that, and updating the sections to more closely match other
integrations.
2024-05-07 21:51:42 +00:00
Rahul Triptahi
7994cba18d [Community][Minor]: Fetch loader_source of GoogleDriveLoader in PebbloSafeLoader. (#21314)
Description: This PR includes fix for loader_source to be fetched from
metadata in case of GdriveLoaders.
Documentation: NA
Unit Test: NA

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-05-07 14:45:58 -07:00
Leonid Ganeline
7cbf1c31aa docs: table legend updated (#21351)
Compacted the table column legends. Added links. Similar to #21259
2024-05-07 14:45:04 -07:00
Erick Friis
d5bde4fa91 infra: use nbconvert for docs build (#21135)
todo

- [x] remove quarto build semantics
- [x] remove quarto download/install
- [x] make `uv` not verbose
2024-05-07 12:30:17 -07:00
Nuno Campos
ad0f3c14c2 core: allow mermaid node labels to have any characters (#21385)
- it's only node ids that are limited

Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** a description of the change
    - **Issue:** the issue # it fixes, if applicable
    - **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
2024-05-07 12:16:49 -07:00
Eugene Yurtsev
6a1d61dbf1 community[patch]: Fix in memory vectorstore to take into account ids when adding docs (#21384)
Should respect `ids` if passed
2024-05-07 15:05:16 -04:00
Ikko Eltociear Ashimine
80170da6c5 docs: update cassandra_database.ipynb (#21145)
Enviroment -> Environment
2024-05-07 15:00:24 -04:00
Miroslav
04e2611fea Added additional headers for HuggingFaceInferenceAPIEmbeddings endpoint. (#21282)
Thank you for contributing to LangChain!

- [ ] **HuggingFaceInferenceAPIEmbeddings**: "Additional Headers"
  - Where: langchain, community, embeddings. huggingface.py.
- Community: add additional headers when needed by custom HuggingFace
TEI embedding endpoints. HuggingFaceInferenceAPIEmbeddings"


- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** Adding the `additional_headers` to be passed to
requests library if needed
    - **Dependencies:** none
 

- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. Tested with locally available TEI endpoints with and without
`additional_headers`
  2. Example  Usage
  
```python
embeddings=HuggingFaceInferenceAPIEmbeddings(
                             api_key=MY_CUSTOM_API_KEY,
                             api_url=MY_CUSTOM_TEI_URL,
                             additional_headers={
                                "Content-Type": "application/json"
                               }
)
```

 

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Massimiliano Pronesti <massimiliano.pronesti@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-05-07 14:17:53 -04:00
Ikko Eltociear Ashimine
c34419e200 docs: update quick_start.ipynb (#21358)
initalize -> initialize



- [x] **PR title**: "package: description"
2024-05-07 08:44:48 -07:00
Guangdong Liu
1fe66f5d39 community(patch) fix MoonshotChat moonshot_api_key is invaild for api key (#21361)
Description: close
https://github.com/langchain-ai/langchain/issues/21237
@baskaryan, @eyurtsev
2024-05-07 08:44:30 -07:00
snova-jamesv
c2ed484653 community: add Sambaverse rate limitation info to sambanova.ipynb (#21379)
- **Description:** add Sambaverse rate limitation info to
sambanova.ipynb
    - **Issue:** NA
    - **Dependencies:** NA
2024-05-07 15:42:44 +00:00
Tomaz Bratanic
0bf7596839 Add simple node properties to llm graph transformer (#21369)
Add support for simple node properties in llm graph transformer.

Linter and dynamic pydantic classes aren't friends, hence I added two
ignores
2024-05-07 08:41:09 -07:00
ccurme
080af0ec53 langchain: sync -> async methods in OpenAI assistants (#21378) 2024-05-07 10:25:55 -04:00
Tomaz Bratanic
ad3fd44a7f experimental: Fix llm graph transformer bug (#21362) 2024-05-06 23:59:55 -07:00
Erick Friis
bb81ae5c8c together: fix chat model and embedding classes (#21353) 2024-05-06 18:26:03 -07:00
Hassan El Mghari
d6ef5fe86a together: add chat models, use openai base (#21337)
**Description:** Adding chat completions to the Together AI package,
which is our most popular API. Also staying backwards compatible with
the old API so folks can continue to use the completions API as well.
Also moved the embedding API to use the OpenAI library to standardize it
further.

**Twitter handle:** @nutlope

- [x] **Add tests and docs**: If you're adding a new integration, please
include
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-06 17:47:06 -07:00
Jacob Lee
a2d31307bb Adds confirmation logs after creating a new project (#12618)
@efriis @hwchase17

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-06 23:28:12 +00:00
Erick Friis
0fb93cd740 core: release 0.1.52 (#21350) 2024-05-06 22:20:35 +00:00
Wu Enze
32c61b3ece community[patch]: chat message history mypy fixes #17048 (#20114)
Relates [#17048]
Description : Applied fix to redis and neo4j file.

Error was : `Cannot override writeable attribute with read-only
property`

fix with the same solution of
[[langchain/libs/community/langchain_community/chat_message_histories/elasticsearch.py](d5c412b0a9/libs/community/langchain_community/chat_message_histories/elasticsearch.py (L170-L175))]

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-06 22:17:45 +00:00
nrpd25
95cc8e3fc3 premai[patch]:Standardized model init args (#21308)
[Standardized model init args
#20085](https://github.com/langchain-ai/langchain/issues/20085)
- Enable premai chat model to be initialized with `model_name` as an
alias for `model`, `api_key` as an alias for `premai_api_key`.
- Add initialization test `test_premai_initialization`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-06 18:12:29 -04:00
Nuno Campos
6f17158606 fix: core: Include in json output also fields set outside the constructor (#21342) 2024-05-06 14:37:36 -07:00
Tomaz Bratanic
ac14f171ac Add indexed properties to neo4j enhanced schema (#21335) 2024-05-06 14:28:34 -07:00
scaserini
a6cdf6572f community: add Kendra DocumentRelevanceOverrideConfigurations request parameter (#20695)
- **Description:** add **DocumentRelevanceOverrideConfigurations**
request parameter to Kendra retriever

Co-authored-by: Simone Caserini <simone.caserini@klarna.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-06 14:26:36 -07:00
Nuno Campos
0345bcf4ef Fix failing test for serialization (#21344)
Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** a description of the change
    - **Issue:** the issue # it fixes, if applicable
    - **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
2024-05-06 21:19:54 +00:00
Trayan Azarov
93226b1945 community: Updated Chroma version range to include 0.5.0 release (#21224)
- Updated Chroma version range to allow releases in 0.5.x.
- Bumped mypy version as linting was failing
2024-05-06 13:31:40 -07:00
Jorge Piedrahita Ortiz
e65652c3e8 community: add SambaNova embeddings integration (#21227)
- **Description:**  SambaNova hosted embeddings integration
2024-05-06 13:29:59 -07:00
Jorge Piedrahita Ortiz
df1c10260c community: minor changes sambanova integration (#21231)
- **Description:** fix: variable names in root validator not allowing
pass credentials as named parameters in llm instancing, also added
sambanova's sambaverse and sambastudio llms to __init__.py for module
import
2024-05-06 13:28:35 -07:00
Jan Soubusta
d9a61c0fa9 fix: respect table_name argument when calling from_texts (#21252)
valid for from_documents() as well

fixes #21251
2024-05-06 20:28:22 +00:00
Pedro Lima
bebf46c4a2 community: added args_schema to YahooFinanceNewsTool (#21232)
Description: this change adds args_schema (pydantic BaseModel) to
YahooFinanceNewsTool for correct schema formatting on LLM function calls

Issue: currently using YahooFinanceNewsTool with OpenAI function calling
returns the following error "TypeError("YahooFinanceNewsTool._run() got
an unexpected keyword argument '__arg1'")". This happens because the
schema sent to the LLM is "input: "{'__arg1': 'MSFT'}"" while the method
should be called with the "query" parameter.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-06 13:27:54 -07:00
Mark Cusack
060987d755 community[minor]: Add indexing via locality sensitive hashing to the Yellowbrick vector store (#20856)
- **Description:** Add LSH-based indexing to the Yellowbrick vector
store module
- **Twitter handle:** @markcusack

---------

Co-authored-by: markcusack <markcusack@markcusacksmac.lan>
Co-authored-by: markcusack <markcusack@Mark-Cusack-sMac.local>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-05-06 20:18:02 +00:00
Rashmi Pawar
a2fdabdad2 mark NemoEmbeddings as deprecated (#21239)
The NemoEmbeddings is deprecated, instead use
langchain-nvidia-ai-endpoints NVIDIAEmbeddings interface.

cc: @mattf

---------

Co-authored-by: Daniel Glogowski <167348611+dglogo@users.noreply.github.com>
Co-authored-by: andyjessen <62343929+andyjessen@users.noreply.github.com>
Co-authored-by: Chris Germann <88305668+TAAGECH9@users.noreply.github.com>
Co-authored-by: gere <gere@kapo.zh.ch>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-06 19:44:58 +00:00
Erick Friis
9e4b24a2d6 langchain: release 0.1.18 (#21338) 2024-05-06 19:39:46 +00:00
Erick Friis
5c000f8d79 community: release 0.0.37 (#21332) 2024-05-06 12:17:42 -07:00
Leonid Ganeline
8c13e8a79b langchain: qa_chain fix (#21279)
Issue: `load_qa_chain` is placed in the __init__.py file. As a result,
it is not listed in the API Reference docs.
BTW `load_qa_chain` is heavily presented in the doc examples, but is
missed in API Ref.
Change: moved code from init.py into a new file. Related: #21266
2024-05-06 14:45:51 -04:00
Erick Friis
7ecf9996f1 community: Revert "community: langkit dependency" (#21333)
Reverts langchain-ai/langchain#21174

Hey team - going to revert this because it doesn't seem necessary for
testing. We should only be adding optional + extended_testing
dependencies for deps that have extended tests.

otherwise it just increases probability of dependency conflicts in the
community lockfile.
2024-05-06 18:44:41 +00:00
Param Singh
fee91d43b7 baichuan[patch]:standardize chat init args (#21298)
Thank you for contributing to LangChain!

community:baichuan[patch]: standardize init args

updated `baichuan_api_key` so that aliased to `api_key`. Added test that
it continues to set the same underlying attribute. Test checks for
`SecretStr`

updated `temperature` with Pydantic Field, added unit test. 

Related to https://github.com/langchain-ai/langchain/issues/20085
2024-05-06 18:33:57 +00:00
Leonid Ganeline
62559b20b3 docs: chains page format (#21259)
Compacted the table column descriptions.
2024-05-06 11:33:38 -07:00
Christophe Bornet
484a009012 community[minor]: Relax constraints on Cassandra VectorStore constructors (#21209)
If Session and/or keyspace are not provided, they are resolved from
cassio's context. So they are not required.
This change is fully backward compatible.
2024-05-06 14:32:32 -04:00
Daniel Glogowski
27e73ebe57 docs: update nvidia docs v2 (#21288)
More doc updates por favor @baskaryan!
2024-05-06 11:29:02 -07:00
Leonid Ganeline
6feddfae88 community: langkit dependency (#21174)
Issue: the `langkit` package is not presented in the `pyproject.toml`
but it is a requirement for the `WhyLabsCallbackHandler`
Change: added `langkit`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-06 18:09:31 +00:00
Erick Friis
811e9cee8b core: release 0.1.51 (#21328) 2024-05-06 10:40:19 -07:00
Pengcheng Liu
144f2821af docs: add example for loading data from LarkSuite wiki. (#21311)
**Description:** Update LarkSuite loader doc to give an example for
loading data from LarkSuite wiki.
**Issue:** None
**Dependencies:** None
**Twitter handle:** None
2024-05-06 09:56:12 -07:00
Mateusz Szewczyk
682d21c3de ibm: Add support for ibm-watsonx-ai new major version (#21313)
Thank you for contributing to LangChain!

- [x] **PR title**: "langchain-ibm: Add support for ibm-watsonx-ai new
major version"


- [x] **PR message**: 
    - **Description:** Add support for ibm-watsonx-ai new major version
    - **Dependencies:** `ibm_watsonx_ai`


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-06 16:48:26 +00:00
Chris Papademetrious
ee6c922c91 langchain[minor]: enhance LocalFileStore to offer update_atime parameter that updates access times on read (#20951)
**Description:**
The `LocalFileStore` class can be used to create an on-disk
`CacheBackedEmbeddings` cache. The number of files in these embeddings
caches can grow to be quite large over time (hundreds of thousands) as
embeddings are computed for new versions of content, but the embeddings
for old/deprecated content are not removed.

A *least-recently-used* (LRU) cache policy could be applied to the
`LocalFileStore` directory to delete cache entries that have not been
referenced for some time:

```bash
# delete files that have not been accessed in the last 90 days
find embeddings_cache_dir/ -atime 90 -print0 | xargs -0 rm
```

However, most filesystems in enterprise environments disable access time
modification on read to improve performance. As a result, the access
times of these cache entry files are not updated when their values are
read.

To resolve this, this pull request updates the `LocalFileStore`
constructor to offer an `update_atime` parameter that causes access
times to be updated when a cache entry is read.

For example,

```python
file_store = LocalFileStore(temp_dir, update_atime=True)
```

The default is `False`, which retains the original behavior.

**Testing:**
I updated the LocalFileStore unit tests to test the access time update.
2024-05-06 11:52:29 -04:00
Tomaz Bratanic
5b6d1a907d Add the extract types to diffbot graph transformer (#21315)
Before you could only extract triples (diffbot calls it facts) from
diffbot to avoid isolated nodes. However, sometimes isolated nodes can
still be useful like for prefiltering, so we want to allow users to
extract them if they want. Default behaviour is unchanged.
2024-05-06 09:19:52 -04:00
Jagadish Krishnamoorthy
c038991590 docs: Update pandas.ipynb (#21289)
Remove the redundant comment.
2024-05-05 20:22:17 +00:00
aditya thomas
b868c78a12 partners[anthropic]: update unit test for key passed in from the environment (#21290)
**Description:** Update unit test for ChatAnthropic
**Issue:** Test for key passed in from the environment should not have
the key initialized in the constructor
**Dependencies:** None
2024-05-05 16:19:10 -04:00
tanersekmen
d310f9c71e docs:update code structure (#21302)
update the structure of llm_chain variables

Co-authored-by: tanersemenn <0418>
2024-05-05 17:18:15 +00:00
Christophe Bornet
ba9dc04ffa docs: Add doc for hybrid search (#21245)
See
[preview](https://langchain-git-fork-cbornet-doc-hybrid-search-langchain.vercel.app/docs/use_cases/question_answering/hybrid/)

In the model of [per user
retrieval](https://python.langchain.com/docs/use_cases/question_answering/per_user/)
2024-05-04 08:22:56 -04:00
Rohan Aggarwal
8021d2a2ab community[minor]: Oraclevs integration (#21123)
Thank you for contributing to LangChain!

- Oracle AI Vector Search 
Oracle AI Vector Search is designed for Artificial Intelligence (AI)
workloads that allows you to query data based on semantics, rather than
keywords. One of the biggest benefit of Oracle AI Vector Search is that
semantic search on unstructured data can be combined with relational
search on business data in one single system. This is not only powerful
but also significantly more effective because you don't need to add a
specialized vector database, eliminating the pain of data fragmentation
between multiple systems.


- Oracle AI Vector Search is designed for Artificial Intelligence (AI)
workloads that allows you to query data based on semantics, rather than
keywords. One of the biggest benefit of Oracle AI Vector Search is that
semantic search on unstructured data can be combined with relational
search on business data in one single system. This is not only powerful
but also significantly more effective because you don't need to add a
specialized vector database, eliminating the pain of data fragmentation
between multiple systems.
This Pull Requests Adds the following functionalities
Oracle AI Vector Search : Vector Store
Oracle AI Vector Search : Document Loader
Oracle AI Vector Search : Document Splitter
Oracle AI Vector Search : Summary
Oracle AI Vector Search : Oracle Embeddings


- We have added unit tests and have our own local unit test suite which
verifies all the code is correct. We have made sure to add guides for
each of the components and one end to end guide that shows how the
entire thing runs.


- We have made sure that make format and make lint run clean.

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: skmishraoracle <shailendra.mishra@oracle.com>
Co-authored-by: hroyofc <harichandan.roy@oracle.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-04 03:15:35 +00:00
ccurme
c9e9470c5a langchain: fix deprecation decorators on extraction chains (#21276)
Calling any of these raises
```
ValueError: A pending deprecation cannot have a scheduled removal
```
2024-05-03 18:29:40 -04:00
Wickes Wong
ee1adaacaa langchain[patch]: Fix summary buffer memory with return message flag (#21115)
## Description
Memory return could be set as `str` or `message` by `return_messages`
flag as mentioned in
https://python.langchain.com/docs/modules/memory/#whether-memory-is-a-string-or-a-list-of-messages,
where
`langchain.chains.conversation.memory.ConversationSummaryBufferMemory`
did not implement that.
This commit added `buffer_as_str` and `buffer_as_messages` function, and
`buffer` now affected by `return_messages` flag.

## Example Test Code and Output

```python
# Fix: ConversationSummaryBufferMemory with return_messages flag function
# Test code
from langchain.chains.conversation.memory import ConversationSummaryBufferMemory
from langchain_community.llms.ollama import Ollama

llm = Ollama()

# Create an instance of ConversationSummaryBufferMemory with return_messages set to True
memory = ConversationSummaryBufferMemory(return_messages=True, llm=llm)

# Add user and AI messages to the chat memory
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("what's up?")

# Print the buffer
print("Buffer:")
print(*map(type, memory.buffer), sep="\n")
print(memory.buffer, "\n")

# Print the buffer as a string
print("Buffer as String:")
print(type(memory.buffer_as_str))
print(memory.buffer_as_str, "\n")

# Print the buffer as messages
print("Buffer as Messages:")
print(*map(type, memory.buffer_as_messages), sep="\n")
print(memory.buffer_as_messages, "\n")

# Print the buffer after setting return_messages to False
memory.return_messages = False
print("Buffer after setting return_messages to False:")
print(type(memory.buffer))
print(memory.buffer, "\n")
```

```plaintext
Buffer:
<class 'langchain_core.messages.human.HumanMessage'>
<class 'langchain_core.messages.ai.AIMessage'>
[HumanMessage(content='hi!'), AIMessage(content="what's up?")] 

Buffer as String:
<class 'str'>
Human: hi!
AI: what's up? 

Buffer as Messages:
<class 'langchain_core.messages.human.HumanMessage'>
<class 'langchain_core.messages.ai.AIMessage'>
[HumanMessage(content='hi!'), AIMessage(content="what's up?")] 

Buffer after setting return_messages to False:
<class 'str'>
Human: hi!
AI: what's up? 
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-03 17:25:09 -04:00
Leonid Ganeline
9639457222 community[patch]: tools imports (#21156)
Issue: we have several helper functions to import third-party libraries
like tools.gmail.utils.import_google in
[community.tools](https://api.python.langchain.com/en/latest/community_api_reference.html#id37).
And we have core.utils.utils.guard_import that works exactly for this
purpose.
The import_<package> functions work inconsistently and rather be private
functions.
Change: replaced these functions with the guard_import function.

Related to #21133
2024-05-03 17:22:45 -04:00
Leonid Ganeline
3ef8b24277 core[patch]: utils.guard_import fix (#21133)
Issues (nit): 
1. `utils.guard_import` prints wrong error message when there is an
import `error.` It prints the whole `module_name` but should be only the
first part as the pip package name. E.i. `langchain_core.utils` -> print
not `langchain-core` but `langchain_core.utils`. Also replace '_' with
'-' in the pip package name.
2. it does not handle the `ModuleNotFoundError` which raised if
`guard_import("wrong_module")`

Fixed issues; added ut-s. Controversial: I've reraised
`ModuleNotFoundError` as `ImportError`, since in case of the error, the
proposed action is the same - we need to install a missed package.
2024-05-03 17:21:36 -04:00
Erick Friis
36c2ca3c8b mistralai: relax tokenizers dep (#21277) 2024-05-03 14:16:22 -07:00
Nuno Campos
6e1e0c7d5c fix: core: draw_mermaid() would create subgroup for edges with same src and tgt (#21275)
Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** a description of the change
    - **Issue:** the issue # it fixes, if applicable
    - **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
2024-05-03 13:51:08 -07:00
Eugene Yurtsev
26a37dce0a langchain[patch]: Remove jsonpatch from poetry file (#21272)
jsonpatch is only used in langchain-core not in langchain
2024-05-03 15:46:05 -04:00
Eugene Yurtsev
335bd01e45 langchain[patch]: Update deprecation warning (#21268)
Update deprecation warning
2024-05-03 15:31:29 -04:00
Leonid Ganeline
23a05c3986 langchain: summarize chain fix (#21266)
Issue: `load_summarize_chain` is placed in the __init__.py file. As a
result, it doesn't listed in the API Reference docs.
Change: moved code from __init__.py into a new file.
2024-05-03 14:44:39 -04:00
ccurme
6da3d92b42 (all): update removal in deprecation warnings from 0.2 to 0.3 (#21265)
We are pushing out the removal of these to 0.3.

`find . -type f -name "*.py" -exec sed -i ''
's/removal="0\.2/removal="0.3/g' {} +`
2024-05-03 14:29:36 -04:00
Eugene Yurtsev
d6e34f9ee5 langchain[patch]: Improve deprecation warnings (#21262)
* Remove spurious derprecation warning
* Make deprecation warnings consistent with 0.1 namespaces that were announced as deprecated
2024-05-03 13:40:16 -04:00
Eugene Yurtsev
487aff7e46 langchain[patch]: Revert 20794 until 0.2 release (#21257)
PR of 2079 was already released as part of 0.1.17rc.


Issue for 0.2 release:
https://github.com/langchain-ai/langchain/issues/21080
2024-05-03 17:02:48 +00:00
Eugene Yurtsev
ba4a309d98 langchain[patch]: Revert breaking change until 0.2 release (#21256)
Reverts a minor breaking change until 0.2 release
2024-05-03 09:42:27 -07:00
Eugene Yurtsev
66a1e3f083 langchain[patch]: Fix flaky unit test (#21258)
Should sort the results of the import test since it depends on import order
2024-05-03 15:55:46 +00:00
Eugene Yurtsev
0989c48028 langchain[minor]: Re-add deleted ainetwork tool (#21254)
* Adding __init__.py to turn it into a package in community
* Adding proxy imports that assume that langchain_community is optional
2024-05-03 11:39:40 -04:00
Christophe Bornet
2fbe82f5e6 community[minor]: Relax constraints on CassandraChatMessageHistory constructor (#21241) 2024-05-03 10:20:39 -04:00
Chris Germann
3a8d1d8838 Hotfix RetrievalQA Docs: docs: Fix formatting (#21183)
# Newline Characters breaking formatting 

**Description**: 
As you can see in the image below, the formatting in the documentation
is broken. As far as I can see the two added `\n` characters are
breaking the documentation. Therefore I would propose to remove those

![image](https://github.com/langchain-ai/langchain/assets/88305668/23b6e726-71b2-4812-91ea-3e8600683733)

**Dependencies**:
None

**Twitter Handle**
- epu9byj

---------

Co-authored-by: gere <gere@kapo.zh.ch>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-03 12:46:29 +00:00
andyjessen
64e17bd793 docs: Fix comment within "handle long text" example (#21248)
The current doc-string comment is referring to the wrong schema.
2024-05-03 12:36:53 +00:00
Daniel Glogowski
c3d169ab00 docs: Update Nvidia documentation (#21240)
Updating Nvidia docs ahead for 5/15 competition. 

Thanks!
2024-05-03 12:29:03 +00:00
Bagatur
70bde15480 docs: add tool choice to tool calling (#21229) 2024-05-03 03:10:22 -04:00
Bagatur
67a5cc34c6 openai[patch]: Release 0.1.6 (#21236) 2024-05-03 04:10:39 +00:00
Erick Friis
c1eb95b967 core: release 0.1.50 (#21230) 2024-05-02 22:44:18 +00:00
Nuno Campos
47ce8d5a57 core: tracer: remove numeric execution order (#21220)
- this hasn't been used in a long time and requires some additional
bookkeeping i'm going to streamline in the next pr
2024-05-02 15:38:55 -07:00
Bagatur
6ac6158a07 openai[patch]: support tool_choice="required" (#21216)
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-02 18:33:25 -04:00
Erick Friis
aa9faa8512 docs: model table keywords, remove tool calling from llm (#21225) 2024-05-02 21:04:29 +00:00
xindoo
c1aa237bc2 langchain: fix syntax error in code comment for create_tool_calling_agent (#21205)
**PR message**:
- **Description:** Corrected a syntax error in the code comments within
the `create_tool_calling_agent` function in the langchain package.
- **Issue:** N/A
- **Dependencies:** No additional dependencies required.
- **Twitter handle:** N/A
2024-05-02 19:17:23 +00:00
ccurme
eb0a2fd53a mistral: release 0.1.6 (#21214) 2024-05-02 13:59:19 -04:00
ccurme
2d77e5e3a1 (standard tests): add test for basic conversation sequence (#21213) 2024-05-02 13:47:10 -04:00
Maxime Perrin
1ebb5a70ad partners(mistralai): Removing unused variable in completion request (using tool_calls or content) (#21201)
This PR fixes #21196.

The error was occurring when calling chat completion API with a chat
history. Indeed, the Mistral API does not accept both `content` and
`tool_calls` in the same body.

This PR removes one of theses variables depending on the necessity.

---------

Co-authored-by: Maxime Perrin <mperrin@doing.fr>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-02 13:20:14 -04:00
Christophe Bornet
683fb45c6b community[patch]: Refactor CassandraDatabase wrapper (#21075)
* Introduce individual `fetch_` methods for easier typing.
* Rework some docstrings to google style
* Move some logic to the tool
* Merge the 2 cassandra utility files
2024-05-02 13:13:08 -04:00
Bagatur
b00fd1dbde infra: Undo gh cache removal (#21210)
Co-authored-by: Nuno Campos <nuno@langchain.dev>
2024-05-02 17:12:32 +00:00
Aditya
ee2c55ca09 docs: Added documentation on Anthropic models on vertex (#21070)
Description:Added documentation on Anthropic models on Vertex
@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-05-02 13:12:01 -04:00
Raghav Dixit
7d451d0041 community[patch]: Update lancedb.py (#21192)
very minor update in LanceDB integration, 'metric' argument was missing.
2024-05-02 17:06:39 +00:00
Bagatur
d297d90ad9 core[patch]: Release 0.1.49 (#21211) 2024-05-02 17:06:27 +00:00
Nuno Campos
663747b730 core[patch]: Fixes for convert_messages (#21207)
- support two-tuples of any sequence type (eg. json.loads never produces
tuples)
- support type alias for role key
- if id is passed in in dict form use it
- if tool_calls passed in in dict form use them

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-02 16:55:42 +00:00
Eugene Yurtsev
df49404794 langchain[patch]: Make more memory code handle community dependency as optional (#21199) 2024-05-02 11:05:26 -04:00
ccurme
bd5d2c2674 langchain: import InMemoryChatMessageHistory from core (#21198) 2024-05-02 14:53:07 +00:00
Eugene Yurtsev
3cd7fced5f langchain[patch],community[minor]: Migrate memory implementations to community (#20845)
Migrates memory implementations to community
2024-05-02 10:46:50 -04:00
Eugene Yurtsev
b5c3a04e4b langchain[patch]: chat histories to handle optional community dependence (#21194) 2024-05-02 10:36:08 -04:00
Eugene Yurtsev
c9119b0e75 langchain[patch],community[minor]: Move some unit tests from langchain to community, use core for fake models (#21190) 2024-05-02 09:57:52 -04:00
Eugene Yurtsev
c306364b06 langchain[patch]: Update more code to use langchain community as an optional dependency (#21170)
More code to use langchain community as an optional dependency
2024-05-02 09:05:48 -04:00
Erick Friis
cd4c54282a infra: cleanup docs build (#21134)
Refactors the docs build in order to:
- run the same `make build` command in both vercel and local build
- incrementally build artifacts in 2 distinct steps, instead of building
all docs in-place (in vercel) or in a _dist dir (locally)

Highlights:
- introduces `make build` in order to build the docs
- collects and generates all files for the build in
`docs/build/intermediate`
- renders those jupyter notebook + markdown files into
`docs/build/outputs`

And now the outputs to host are in `docs/build/outputs`, which will need
a vercel settings change.

Todo:
- [ ] figure out how to point the right directory (right now deleting
and moving docs dir in vercel_build.sh isn't great)
2024-05-01 17:34:05 -07:00
Bagatur
6fa8626e2f openai[patch]: fix azure open lc serialization, release 0.1.5 (#21159) 2024-05-01 18:03:29 -04:00
Eugene Yurtsev
94a838740e langchain[patch]: Migrate more code in utils to use optional langchain import (#21166)
Moving is interactive util to avoid circular deps
2024-05-01 17:18:42 -04:00
Eugene Yurtsev
23fdd320bc langchain[patch]: Migrate more code to use optional community in agents namespace (#21167) 2024-05-01 16:25:44 -04:00
Tomaz Bratanic
9e53fa7d2e Some more fixes to neo4j enhanced schema (#21139) 2024-05-01 13:12:43 -07:00
Erick Friis
0694538c39 ai21: fix core version (#21168) 2024-05-01 13:10:22 -07:00
Eugene Yurtsev
44602bdc20 langchain[patch],community[minor]: Move load_tools to community (#21158)
Move load tools to community
2024-05-01 16:05:41 -04:00
Eugene Yurtsev
9932f49b3e langchain[patch]: Migrate llms to use optional community imports (#21101) 2024-05-01 16:04:45 -04:00
Eugene Yurtsev
57e8e70daa langchain[patch]: Migrate chat models to optional community imports (#21090)
Migrate chat models to optional community imports
2024-05-01 16:04:12 -04:00
Eugene Yurtsev
2914abd747 langchain[patch]: Fix how the serializable test identifies serializable objects (#21165)
dir() will not work if we're using optional imports. The only way to do this is by using contents of __all__
2024-05-01 15:56:11 -04:00
Eugene Yurtsev
23c5d87311 langchain[patch]: Migrate utils to use optional langchain_community (#21163)
Migrate utils to use optional imports from langchain community
2024-05-01 15:24:02 -04:00
Eugene Yurtsev
bec3eee3fa langchain[patch]: Migrate retrievers to use optional langchain community imports (#21155) 2024-05-01 14:44:44 -04:00
Eugene Yurtsev
43110daea5 langchain[patch]: Update some agent tool kits to handle community import as optional (#21157)
A few things that were not caught by the migration script
2024-05-01 14:22:54 -04:00
Eugene Yurtsev
59f10ab3e0 langchain[patch]: Migrate embeddings to optional imports (#21099) 2024-05-01 13:47:37 -04:00
Eugene Yurtsev
2f709d94d7 langchain[patch]: Migrate vectorstores to use optional langchain community imports (#21150) 2024-05-01 13:33:37 -04:00
Eugene Yurtsev
7230e430db langchain[patch]: Migrate top level files to use optional langchain community (#21152)
Migrate a few top level files to treat langchain community as an optional dependency
2024-05-01 13:23:03 -04:00
Erick Friis
daab9789a8 ai21: release 0.1.4 (#21151) 2024-05-01 17:16:27 +00:00
Asaf Joseph Gardin
642975dd9f partners: AI21 Labs Jamba Support (#20815)
Description: Added support for AI21 new model - Jamba
Twitter handle: https://github.com/AI21Labs

---------

Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-01 10:12:44 -07:00
Eugene Yurtsev
7a39fe60da langchain[patch]: Migrate utilities to handle langchain community as optional (#21149) 2024-05-01 13:09:34 -04:00
Eugene Yurtsev
b879184595 langchain[patch]: embedddings distance move import of openai embeddings into local scope (#21148) 2024-05-01 12:51:51 -04:00
Bagatur
8b4b75e543 docs: standardize vertexai params (#20167)
Related to #20085

Requires https://github.com/langchain-ai/langchain-google/pull/121
2024-05-01 11:42:18 -04:00
Eugene Yurtsev
0e5bf16d00 langchain[patch]: Migrate document loaders to use optional langchain community imports (#21095) 2024-05-01 11:26:25 -04:00
Jacob Lee
bd38073d76 👥 Update LangChain people data (#21143)
👥 Update LangChain people data

Co-authored-by: github-actions <github-actions@github.com>
2024-05-01 11:01:43 -04:00
Harrison Chase
4d1c21d97d community[patch]: Fix alternative name in deprecation notice for sql_database (#21144) 2024-05-01 10:59:42 -04:00
East Agile
2a6f78a53f community[minor]: Rememberizer retriever (#20052)
**Description:**
This pull request introduces a new feature for LangChain: the
integration with the Rememberizer API through a custom retriever.
This enables LangChain applications to allow users to load and sync
their data from Dropbox, Google Drive, Slack, their hard drive into a
vector database that LangChain can query. Queries involve sending text
chunks generated within LangChain and retrieving a collection of
semantically relevant user data for inclusion in LLM prompts.
User knowledge dramatically improved AI applications.
The Rememberizer integration will also allow users to access general
purpose vectorized data such as Reddit channel discussions and US
patents.

**Issue:**
N/A

**Dependencies:**
N/A

**Twitter handle:**
https://twitter.com/Rememberizer
2024-05-01 10:41:44 -04:00
Eugene Yurtsev
1ce1a10f2b langchain[patch],community[minor]: Move graph index creator (#20795)
Move graph index creator to community
2024-05-01 10:04:30 -04:00
Eugene Yurtsev
aa0bc7467c langchain[patch]: Migrate agents module into optional imports for community (#21088) 2024-05-01 09:36:03 -04:00
Eugene Yurtsev
86ff8a3fb4 langchain[patch]: Update docstore module to use optional imports from community (#21091) 2024-05-01 09:35:05 -04:00
Eugene Yurtsev
d640605694 langchain[patch]: Migrate chat loaders to optional community imports (#21089)
Migrate chat loaders to optional community imports
2024-05-01 09:34:44 -04:00
Charlie Marsh
2b10c4dd52 ci: Use ruff check in Makefile (#21138)
## Summary

`ruff /path/to/file.py` works but is deprecated, and we now recommend
`ruff check /path/to/file.py` (to match `ruff format /path/to/file.py`).
2024-05-01 09:34:15 -04:00
Eugene Yurtsev
2fcab9acd9 langchain[patch]: Upgrade storage to treat langchain community as optional (#21105) 2024-05-01 09:33:31 -04:00
William FH
ab55f6996d [Core] Tracing: update parent run_tree's child_runs (#21049) 2024-05-01 06:33:08 -07:00
Abhishek Bhagwat
86fe484e24 docs: Docs (sample notebook) for Vertex DIY RAG Ranking API (#21054)
Vertex DIY RAG APIs helps to build complex RAG systems and provide more
granular control, and are suited for custom use cases.

The Ranking API takes in a list of documents and reranks those documents
based on how relevant the documents are to a given query. Compared to
embeddings that look purely at the semantic similarity of a document and
a query, the ranking API can give you a more precise score for how well
a document answers a given query.


[Reference](https://cloud.google.com/generative-ai-app-builder/docs/ranking)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-01 05:39:39 +00:00
Stuart Leeks
8a01760a0f infra: Sync devcontainer.json and compose file mount location (#20461)
**Sync the config in `devcontainer.json` and `docker-compose.yml`**

Issue: when opening the current `master` branch in a dev container in VS
Code, I get the following message as VS Code cannot find the mounted
source folder:


![image](https://github.com/langchain-ai/langchain/assets/1824461/41cf20c0-d1e0-4648-9578-edf80b99c2db)

Opening in a GitHub Codespace works (it seems to ignore the mounts in
the `docker-compose.yml`.

This PR updates the mount in `docker-compose.yml` and the config in
`devcontainer.json` so that the two align.

I have tested these changes in GitHub Codespaces and a VS Code dev
container and both loaded successfully.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-01 01:32:12 -04:00
aditya thomas
12b1caf295 openai[patch]: add tests for secret_str for keys (#20982)
**Description:** Add tests to check API keys and Active Directory tokens
are masked
**Issue:** Resolves #12165 for OpenAI and Azure OpenAI models
**Dependencies:** None

Also resolves #12473 which may be closed.

Additional contributors @alex4321 (#12473) and @onesolpark (#12542)
2024-05-01 01:26:20 -04:00
Noah
45ddf4d26f community[patch]: Update comments for lazy_load method (#21063)
- [ ] **PR message**: 
- **Description:** Refactored the lazy_load method to use asynchronous
execution for improved performance. The method now initiates scraping of
all URLs simultaneously using asyncio.gather, enhancing data fetching
efficiency. Each Document object is yielded immediately once its content
becomes available, streamlining the entire process.
    - **Issue:** N/A
- **Dependencies:** Requires the asyncio library for handling
asynchronous tasks, which should already be part of standard Python
libraries in Python 3.7 and above.
    - **Email:** [r73327118@gmail.com](mailto:r73327118@gmail.com)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-01 01:20:57 -04:00
Liu Xiaodong
3b473d10f2 experimental: clean python repl input(experimental:Added code for PythonREPL) (#20930)
Update python.py(experimental:Added code for PythonREPL)

Added code for PythonREPL, defining a static method 'sanitize_input'
that takes the string 'query' as input and returns a sanitizing string.
The purpose of this method is to remove unwanted characters from the
input string, Specifically:

1. Delete the whitespace at the beginning and end of the string (' \s').
2. Remove the quotation marks (`` ` ``) at the beginning and end of the
string.
3. Remove the keyword "python" at the beginning of the string (case
insensitive) because the user may have typed it.

This method uses regular expressions (regex) to implement sanitizing.

It all started with this code:
from langchain.agents import Tool
from langchain_experimental.utilities import PythonREPL

python_repl = PythonREPL()
repl_tool = Tool(
    name="python_repl",
description="Remove redundant formatting marks at the beginning and end
of source code from input.Use a Python shell to execute python commands.
If you want to see the output of a value, you should print it out with
`print(...)`.",
    func=python_repl.run,
)

When I call the agent to write a piece of code for me and execute it
with the defined code, I must get an error: SyntaxError('invalid
syntax', ('<string>', 1, 1,'In', 1, 2))

After checking, I found that pythonREPL has less formatting of input
code than the soon-to-be deprecated pythonREPL tool, so I added this
step to it, so that no matter what code I ask the agent to write for me,
it can be executed smoothly and get the output result.
I have tried modifying the prompt words to solve this problem before,
but it did not work, and by adding a simple format check, the problem is
well resolved.
<img width="1271" alt="image"
src="https://github.com/langchain-ai/langchain/assets/164149097/c49a685f-d246-4b11-b655-fd952fc2f04c">

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-01 05:19:09 +00:00
Ismail Hossain Polas
1fdf63fa6c community[patch]: update package name to bagelML (#19948)
**Description**
This pull request updates the Bagel Network package name from
"betabageldb" to "bagelML" to align with the latest changes made by the
Bagel Network team.

The following modifications have been made:

- Updated all references to the old package name ("betabageldb") with
the new package name ("bagelML") throughout the codebase.
- Modified the documentation, and any relevant scripts to reflect the
package name change.
- Tested the changes to ensure that the functionality remains intact and
no breaking changes were introduced.

By merging this pull request, our project will stay up to date with the
latest Bagel Network package naming convention, ensuring compatibility
and smooth integration with their updated library.

Please review the changes and provide any feedback or suggestions. Thank
you!
2024-05-01 01:17:33 -04:00
Tomaz Bratanic
7860e4c649 experimental[patch]: Add support for non-function calling LLMs in llm graph transformers (#21014) 2024-05-01 01:16:07 -04:00
Erick Friis
67e6744e0f docs: fix some notebook formatting (#21136) 2024-04-30 21:39:03 -07:00
tianzedavid
5a8909440b docs: remove repetitive words (#21058)
remove repetitive words

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-01 01:10:42 +00:00
Leonid Kuligin
a36935b520 docs: updated docs on langchain_google_community (#21064)
Thank you for contributing to LangChain!

- [ ] **PR title**: "docs: updated docs on langchain_google_community"


- [ ] **PR message**:
    - **Description:** updated docs on langchain_google_community
2024-04-30 20:20:49 -04:00
Tomaz Bratanic
c9e96bb5e2 community[patch]: Fix neo4j enhanced schema bugs (#21072) 2024-04-30 20:16:26 -04:00
junkeon
8d2909ee25 upstage[minor]: Update few codes and add upstage loader in pdf section (#21085)
**Description:** Update UpstageLayoutAnalysisParser and Loader and add
upstage loader example in pdf section
**Dependencies:** langchain_community
**Twitter handle:** [@upstageai](https://twitter.com/upstageai)

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
2024-04-30 20:15:49 -04:00
Bagatur
bef50ded63 openai[patch]: fix special token default behavior (#21131)
By default handle special sequences as regular text
2024-04-30 20:08:24 -04:00
MacanPN
0f7f448603 community[patch]: add delete() method to AzureSearch vector store (#21127)
**Issue:**
Currently `AzureSearch` vector store does not implement `delete` method.
This PR implements it. This also makes it compatible with LangChain
indexer.

**Dependencies:**
None

**Twitter handle:**
@martintriska1

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-30 23:46:18 +00:00
Jorge Piedrahita Ortiz
3441a11b21 docs: minor changes in sambanova community integration docs (#21129)
- **Description:** minor changes in sambanova community integration
notebook docs

---------

Co-authored-by: Renate Kempf <165940384+renate-snova@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-30 23:44:26 +00:00
Bagatur
6d3e9eaf84 docs: format (#21132) 2024-04-30 23:32:41 +00:00
Erick Friis
14422a4220 langchain: fix core dep (#21128) 2024-04-30 14:55:12 -07:00
Erick Friis
6c938da302 langchain: release 0.1.17 (#21125) 2024-04-30 14:43:59 -07:00
Erick Friis
5f8a307565 infra: same tagging for langchain (#21126) 2024-04-30 14:43:45 -07:00
Eugene Yurtsev
bf95414758 langchain[minor]: enhance unit test to test imports recursively (#21122) 2024-04-30 17:05:53 -04:00
Eugene Yurtsev
e4f51f59a2 langchain[patch]: Migrate tools to treat community imports as optional (#21117)
Migrate tools to treat community imports as optional
2024-04-30 16:26:18 -04:00
Eugene Yurtsev
9e788f09c6 langchain[patch]: Migrate output parsers to support optional community imports (#21103)
Migrate output parsers
2024-04-30 16:24:29 -04:00
Eugene Yurtsev
3853fe9f64 langchain[patch]: Migrate graphs to use optional community imports (#21100)
Migrate graphs to use optional community imports.
2024-04-30 16:24:06 -04:00
Eugene Yurtsev
8658d52587 langchain[patch]: Upgrade prompts to optional imports (#21078)
Upgrades prompts module to use optional imports.

This code was generated with a migration script, but had to be adjusted
manually a bit.

Testing in preparation for applying this code modification across the
rest of the modules in langchain package to reverse the dependency
between langchain community and langchain.
2024-04-30 16:23:39 -04:00
Eugene Yurtsev
9b6d04a187 langchain[patch]: Migrate document transformers (#21098)
Migrate document transformers
2024-04-30 16:20:02 -04:00
Eugene Yurtsev
aec13a6123 langchain[patch]: Migrate callbacks module to use optional imports for community (#21086) 2024-04-30 16:19:13 -04:00
Erick Friis
8a62fb0570 community: release 0.0.36 (#21118) 2024-04-30 13:18:44 -07:00
Erick Friis
2407c353be core: release 0.1.48 (#21113) 2024-04-30 19:52:36 +00:00
Erick Friis
dbdfa3d34e infra: fix minimum version install to force pypi install (#21112) 2024-04-30 12:41:26 -07:00
Charlie Marsh
fd94aa8366 partner[patch]: Upgrade to Ruff v0.4.2 (#21108)
## Summary

No new diagnostics (given that the set of enabled rules hasn't changed),
but gains access to our new parser (much faster) and reduced false
positives all around.
2024-04-30 15:06:42 -04:00
Jamsheed Mistri
3e749369ef community[minor]: bump version of LayerupSecurity, add support for untrusted_input parameter (#19985)
**Description:** update version of LayerupSecurity package for the
Layerup Security integration. Add untrusted_input parameter.
2024-04-30 14:55:26 -04:00
fubuki8087
f1c3687aa5 community[patch]: Using the right encoding to parse the web page in RecursiveUrlLoader (#20632)
As shown in #13749 , `RecursiveUrlLoader` has encoding issue. This PR is
to solve this.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-30 18:41:36 +00:00
Jakub Pawłowski
b0b1a67771 community[patch]: Skip unexpected 404 HTTP Error in Arxiv download (#21042)
### Description:
When attempting to download PDF files from arXiv, an unexpected 404
error frequently occurs. This error halts the operation, regardless of
whether there are additional documents to process. As a solution, I
suggest implementing a mechanism to ignore and communicate this error
and continue processing the next document from the list.

Proposed Solution: To address the issue of unexpected 404 errors during
PDF downloads from arXiv, I propose implementing the following solution:

- Error Handling: Implement error handling mechanisms to catch and
handle 404 errors gracefully.
- Communication: Inform the user or logging system about the occurrence
of the 404 error.
- Continued Processing: After encountering a 404 error, continue
processing the remaining documents from the list without interruption.

This solution ensures that the application can handle unexpected errors
without terminating the entire operation. It promotes resilience and
robustness in the face of intermittent issues encountered during PDF
downloads from arXiv.

### Issue:
#20909 
### Dependencies:
none

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-30 18:29:22 +00:00
Erick Friis
b9c53e95b7 community: release 0.0.35 (#21104) 2024-04-30 17:48:56 +00:00
Eugene Yurtsev
3c064a757f core[minor],langchain[patch],community[patch]: Move storage interfaces to core (#20750)
* Move storage interface to core
* Move in memory and file system implementation to core
2024-04-30 13:14:26 -04:00
Charlie Marsh
8f38b7a725 multiple: Remove unnecessary Ruff suppression comments (#21050)
## Summary

I ran `ruff check --extend-select RUF100 -n` to identify `# noqa`
comments that weren't having any effect in Ruff, and then `ruff check
--extend-select RUF100 -n --fix` on select files to remove all of the
unnecessary `# noqa: F401` violations. It's possible that these were
needed at some point in the past, but they're not necessary in Ruff
v0.1.15 (used by LangChain) or in the latest release.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-30 17:13:48 +00:00
Erick Friis
748f2ba9ea core: release 0.1.47 (#21094) 2024-04-30 09:22:05 -07:00
Erick Friis
efe27ef849 infra: tag non-langchain releases (#20805) 2024-04-30 16:15:46 +00:00
Eugene Yurtsev
c8f18a2524 langchain[patch]: Update import handling in adapters (#21079) 2024-04-30 10:55:29 -04:00
William FH
5c63ac3dd7 [Patch] Dedent docstring (#20959)
Technically a slight prompt breaking change, but I think positive EV in
that it saves tokens and results in more sane / in-distribution prompts
2024-04-30 07:40:57 -07:00
Eugene Yurtsev
845d8e0025 langchain[patch]: Update handling of deprecation warnings (#21083)
Chains should not be emitting deprecation warnings.
2024-04-30 10:30:23 -04:00
Christophe Bornet
5c77f45b06 community[minor]: Add async methods to CassandraCache and CassandraSemanticCache (#20654) 2024-04-30 10:27:44 -04:00
Christophe Bornet
d6e9bd3011 docs: Bump cassio min version in docs (#21081)
Cassio 0.6+ is recommended for async vector store (not blocking on
getting the embedding dimension) and for hybrid search support.
2024-04-30 10:25:37 -04:00
William FH
db14d4326d [Core] Feat Pretty Print Tool calls (#20997)
Right now, `tool_calls` are not included in the `pretty_print()` output.
Would be nice to show!


![image](https://github.com/langchain-ai/langchain/assets/13333726/6a0ffca3-d02f-4e18-bc76-513eeca2e964)
2024-04-30 07:14:43 -07:00
Kuro Denjiro
fa4124b821 community[minor]: add mintbase loader to langchain (#20089)
- [x] **Add Near NFT loader**: "community: Load NFT near block chain
using mintbase graph API"

- [x] **PR message**: 
    - **Description:** a description of the change
    - **Twitter handle:**Kurodenjiro

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-30 04:11:56 +00:00
Alexander Dicke
d7e12750df community[patch]: allows using text-generation-inference /generate route with HuggingFaceEndpoint (#20100)
- **Description:** allows to use the /generate route of
`text-generation-inference` with the `HuggingFaceEndpoint`
2024-04-29 23:09:55 -04:00
Jonathan Evans
ea43c669f2 community[patch]: Fix Bedrock Mistral stop sequence request key (#20115)
- **Description:** Change Bedrock's Mistral stop sequence key mapping to
"stop" rather than "stop_sequences" which is the correct key [Bedrock
docs
link](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral.html)
`{
    "prompt": string,
    "max_tokens" : int,
    "stop" : [string],    
    "temperature": float,
    "top_p": float,
    "top_k": int
}`
- **Issue:** #20053 
- **Dependencies:** N/A
- **Twitter handle:** N/a
2024-04-29 20:14:36 -04:00
davidkgp
28b0b0d863 community[patch]: Fix for github issue #17690 (#20117)
…/17690

Thank you for contributing to LangChain!

- [x] **Fix Google Lens knowledge graph issue**: "langchain: community"
- Fix for [No "knowledge_graph" property in Google Lens API call from
SerpAPI](https://github.com/langchain-ai/langchain/issues/17690)


- [x] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** handled the existence of keys in the json response of
Google Lens
- **Issue:** [No "knowledge_graph" property in Google Lens API call from
SerpAPI](https://github.com/langchain-ai/langchain/issues/17690)



- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-30 00:10:08 +00:00
高远
a7a4630bf4 community[patch]: Modify the text field type and add new exception handling (#20116)
Co-authored-by: gaoyuan <gaoyuan.20001218@bytedance.com>
2024-04-29 20:06:00 -04:00
Rahul Triptahi
c172611647 community[patch]: Add classifier_url argument in PebbloSafeLoader and documentation update. (#21030)
Description: Add classifier_url argument in PebbloSafeLoader.
Documentation: Updated PebbloSafeLoader documentation with above change
and new links for pebblo github pages.

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-04-29 17:41:09 -04:00
Leonid Ganeline
08d08d7c83 docs: langchain docstrings updates (#21032)
Added missed docstings. Formatted docstrings into a consistent format.
2024-04-29 17:40:44 -04:00
Leonid Ganeline
85094cbb3a docs: community docstring updates (#21040)
Added missed docstrings. Updated docstrings to consistent format.
2024-04-29 17:40:23 -04:00
Rodrigo Nogueira
90f19028e5 community[patch]: Add maritalk streaming (sync and async) (#19203)
Co-authored-by: RosevalJr <rdmalajr@gmail.com>
Co-authored-by: Roseval Donisete Malaquias Junior <roseval@maritaca.ai>
2024-04-29 21:31:14 +00:00
Cahid Arda Öz
cc6191cb90 community[minor]: Add support for Upstash Vector (#20824)
## Description

Adding `UpstashVectorStore` to utilize [Upstash
Vector](https://upstash.com/docs/vector/overall/getstarted)!

#17012 was opened to add Upstash Vector to langchain but was closed to
wait for filtering. Now filtering is added to Upstash vector and we open
a new PR. Additionally, [embedding
feature](https://upstash.com/docs/vector/features/embeddingmodels) was
added and we add this to our vectorstore aswell.

## Dependencies

[upstash-vector](https://pypi.org/project/upstash-vector/) should be
installed to use `UpstashVectorStore`. Didn't update dependencies
because of [this comment in the previous
PR](https://github.com/langchain-ai/langchain/pull/17012#pullrequestreview-1876522450).

## Tests

Tests are added and they pass. Tests are naturally network bound since
Upstash Vector is offered through an API.

There was [a discussion in the previous PR about mocking the
unittests](https://github.com/langchain-ai/langchain/pull/17012#pullrequestreview-1891820567).
We didn't make changes to this end yet. We can update the tests if you
can explain how the tests should be mocked.

---------

Co-authored-by: ytkimirti <yusuftaha9@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 17:25:01 -04:00
Leonid Ganeline
1a2ff56cd8 core[patch[: docstring update (#21036)
Added missed docstrings. Updated docstrings to consistent format.
2024-04-29 15:35:34 -04:00
Eugene Yurtsev
f479a337cc langchain[patch]: replace deprecated imports with imports from langchain_core (#21033)
* Output of running the migration script.
* Ran only against langchain code itself and not the unit tests.
2024-04-29 15:34:31 -04:00
Eugene Yurtsev
82d4afcac0 langchain[minor]: Code to handle dynamic imports (#20893)
Proposing to centralize code for handling dynamic imports. This allows treating langchain-community as an optional dependency.

---

The proposal is to scan the code base and to replace all existing imports with dynamic imports using this functionality.
2024-04-29 15:34:03 -04:00
Erick Friis
854ae3e1de mistralai: release 0.1.5, allow client passing in (#21034) 2024-04-29 17:14:26 +00:00
chyroc
3e241956d3 community[minor]: add coze chat model (#20770)
add coze chat model, to call coze.com apis
2024-04-29 12:26:16 -04:00
Eugene Yurtsev
29493bb598 cli[minor]: improve confirmation message with more details (#21027)
Improve confirmation message with more details
2024-04-29 12:20:42 -04:00
Eugene Yurtsev
aab78a37f3 cli[patch]: Ignore imports that change the name of the class (#21026)
Not currently handeled by migration script
2024-04-29 12:20:30 -04:00
Massimiliano Pronesti
ce89b34fc0 community[patch]: support hybrid search with threshold in Azure AI Search Retriever (#20907)
Support hybrid search with a score threshold -- similar to what we do
for similarity search.
2024-04-29 12:11:44 -04:00
Andrei Panferov
b3efa38cc0 community[patch]: GigaChat model selection fix (#20988)
Fixed the error that the model name is never actually put into GigaChat
request payload, always defaulting to `GigaChat-Lite`.

With this fix, model selection through
```python
import os
from langchain.chat_models.gigachat import GigaChat

chat = GigaChat(
    name="GigaChat-Pro", # <- HERE!!!!!
    ...
)
```
should actually work, as intended in
[here](804390ba4b/libs/community/langchain_community/llms/gigachat.py (L36)).

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-29 16:08:26 +00:00
Patrick McFadin
3331865f6b community[minor]: add Cassandra Database Toolkit (#20246)
**Description**: ToolKit and Tools for accessing data in a Cassandra
Database primarily for Agent integration. Initially, this includes the
following tools:
- `cassandra_db_schema` Gathers all schema information for the connected
database or a specific schema. Critical for the agent when determining
actions.
- `cassandra_db_select_table_data` Selects data from a specific keyspace
and table. The agent can pass paramaters for a predicate and limits on
the number of returned records.
- `cassandra_db_query` Expiriemental alternative to
`cassandra_db_select_table_data` which takes a query string completely
formed by the agent instead of parameters. May be removed in future
versions.

Includes unit test and two notebooks to demonstrate usage. 

**Dependencies**: cassio
**Twitter handle**: @PatrickMcFadin

---------

Co-authored-by: Phil Miesle <phil.miesle@datastax.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 15:51:43 +00:00
Igor Brai
b3e74f2b98 community[minor]: add mojeek search util (#20922)
**Description:** This pull request introduces a new feature to community
tools, enhancing its search capabilities by integrating the Mojeek
search engine
**Dependencies:** None

---------

Co-authored-by: Igor Brai <igor@mojeek.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-29 15:49:53 +00:00
hmn falahi
4822beb298 Ignore self/cls from required args of class functions in convert_to_openai_tool (#20691)
Removed redundant self/cls from required args of class functions in
_get_python_function_required_args:

```python
class MemberTool:
    def search_member(
            self,
            keyword: str,
            *args,
            **kwargs,
    ):
        """Search on members with any keyword like first_name, last_name, email

        Args:
            keyword: Any keyword of member
        """

        headers = dict(authorization=kwargs['token'])
        members = []
        try:
            members = request_(
                method='SEARCH',
                url=f'{service_url}/apiv1/members',
                headers=headers,
                json=dict(query=keyword),
            )

        except Exception as e:
            logger.info(e.__doc__)

        return members

convert_to_openai_tool(MemberTool.search_member)
```
expected result:
```
{'type': 'function', 'function': {'name': 'search_member', 'description': 'Search on members with any keyword like first_name, last_name, username, email', 'parameters': {'type': 'object', 'properties': {'keyword': {'type': 'string', 'description': 'Any keyword of member'}}, 'required': ['keyword']}}}
```

#20685

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 11:46:26 -04:00
Rahul Triptahi
a64a1943fd docs: Document update for load_extended_matadata in GoogleDriveLoader (#20950)
Document: Updated google_drive,ipynb for loading following extended
metadata.
 - full_path - Full path of the file/s in google drive.
 - owner - owner of the file/s.
 - size - size of the file/s.

Code changes:
[langchain-google/pull/179.](https://github.com/langchain-ai/langchain-google/pull/179)

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-29 11:41:57 -04:00
Eugene Yurtsev
4f4ee8e2cf cli[patch]: Update migrations file manually (#21021)
We need to replace occurrences in the code of RunnableMap not just the
import,
so for now, we don't replace RunnableMap.
2024-04-29 10:53:31 -04:00
Tomaz Bratanic
67428c4052 community[patch]: Neo4j enhanced schema (#20983)
Scan the database for example values and provide them to an LLM for
better inference of Text2cypher
2024-04-29 10:45:55 -04:00
Leonid Kuligin
dc70c23a11 docs: switched GCSLoaders docs to langchain-google-community (#20985)
Thank you for contributing to LangChain!

- [ ] **PR title**: "docs: switched GCSLoaders docs to
langchain-google-community"

- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** switched GCSLoaders docs to
langchain-google-community
2024-04-29 10:45:11 -04:00
aditya thomas
8b59bddc03 anthropic[patch]: add tests for secret_str for api key (#20986)
**Description:** Add tests to check API keys are masked
**Issue:** Resolves
https://github.com/langchain-ai/langchain/issues/12165 for Anthropic
models
**Dependencies:** None
2024-04-29 10:39:14 -04:00
Pengcheng Liu
1fad39be1c community[minor]: Add LarkSuite wiki document loader. (#21016)
**Description:** Add LarkSuite wiki document loader. Refer to [LarkSuite
api document
](https://open.feishu.cn/document/server-docs/docs/wiki-v2/space-node/list)for
details.
**Issue:** None
**Dependencies:** None
**Twitter handle:** None
2024-04-29 10:37:50 -04:00
Tomaz Bratanic
d36332476c docs: Add neo4j relationship vector index docs (#20990)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 14:36:47 +00:00
Leonid Ganeline
dc7c06bc07 community[minor]: import fix (#20995)
Issue: When the third-party package is not installed, whenever we need
to `pip install <package>` the ImportError is raised.
But sometimes, the `ValueError` or `ModuleNotFoundError` is raised. It
is bad for consistency.
Change: replaced the `ValueError` or `ModuleNotFoundError` with
`ImportError` when we raise an error with the `pip install <package>`
message.
Note: Ideally, we replace all `try: import... except... raise ... `with
helper functions like `import_aim` or just use the existing
[langchain_core.utils.utils.guard_import](https://api.python.langchain.com/en/latest/utils/langchain_core.utils.utils.guard_import.html#langchain_core.utils.utils.guard_import)
But it would be much bigger refactoring. @baskaryan Please, advice on
this.
2024-04-29 10:32:50 -04:00
Karim Lalani
2ddac9a7c3 experimental[minor]: Add bind_tools and with_structured_output functions to OllamaFunctions (#20881)
Implemented bind_tools for OllamaFunctions.
Made OllamaFunctions sub class of ChatOllama.
Implemented with_structured_output for OllamaFunctions.

integration unit test has been updated.
notebook has been updated.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 14:13:33 +00:00
Eugene Yurtsev
d781560722 cli[minor]: Add ipynb support, add text_splitters (#20963) 2024-04-29 10:11:21 -04:00
Vadym Barda
5e0b6b3e75 docs: update langserve link in LCEL docs (#20992) 2024-04-29 09:06:10 -04:00
Aditya
07ce39bfe7 docs: updated tutorials for Image generation and Vector Search (#21000)
Description: docs: updated tutorials for Image generation and Vector
Search

@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-04-29 09:04:11 -04:00
Aditya
17bbb7d2a5 docs: updated tutorial for Gemini versions, included safety attribute updates (#21006)
Description:updated tutorial for Gemini versions, included safety
attribute updates

@lkuligin For review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-04-29 09:01:54 -04:00
WilliamEspegren
804390ba4b community: Spider integration (#20937)
Added the [Spider.cloud](https://spider.cloud) document loader.
[Spider](https://github.com/spider-rs/spider) is the
[fastest](https://github.com/spider-rs/spider/blob/main/benches/BENCHMARKS.md)
and cheapest crawler that returns LLM-ready data.

```
- **Description:** Adds Spider data loader
- **Dependencies:** spider-client
- **Twitter handle:** @WilliamEspegren 
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: = <=>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-27 21:45:03 +00:00
Jamie Lemon
6342217b93 docs: Moves "Using PyMuPDF" to higher up the page. (#20832)
**Description:**
This PR moves the **PyMuPDF** PDF loader solution to be underneath
**PyPDF**. This is because it is the the 2nd most popular PyPI package
after **PyPDF**.

Please refer to these numbers, at the time of writing as follows:

PyPDF
https://www.pepy.tech/projects/PyPDF2
160 million

PyMuPDF
https://www.pepy.tech/projects/pymupdf
60 million

PDFPlumber
https://www.pepy.tech/projects/pdfplumber
23 million

PDFMiner
https://www.pepy.tech/projects/pdfminer
16 million

PyPDFium2
https://www.pepy.tech/projects/pypdfium2
8 million

Unstructured
https://www.pepy.tech/projects/unstructured
8 million


Please note I am an active contributor to
https://github.com/pymupdf/PyMuPDF

Many thanks!

----

**Twitter handle:**
@artifex
2024-04-27 20:40:20 +00:00
Chouaieb Nemri
8097bec472 Added LogEntry, Any, Dict, List, Optional, TypedDict imports (#20970)
Thank you for contributing to LangChain!

- [ ] **PR title**: "package: docs"

- [ ] **PR message**:
- **Description:** Uptaded docs: Rag streaming use-cases notebook with
LogEntry, Any, Dict, List, Optional, TypedDict imports
    - **Twitter handle:** c_nemri

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-27 20:13:54 +00:00
ccurme
9ec7151317 fireworks: fix integration tests (#20973) 2024-04-27 19:49:46 +00:00
William FH
9fa9f05e5d Catch System Error in ast parse (#20961)
I can't seem to reproduce, but i got this:

```
SystemError: AST constructor recursion depth mismatch (before=102, after=37)
```

And the operation isn't critical for the actual forward pass so seems
preferable to expand our caught exceptions
2024-04-26 19:31:55 -07:00
YH
2aca7fcdcf core[patch]: Enhance link extraction with query parameters (#20259)
**Description**: This update enhances the `extract_sub_links` function
within the `langchain_core/utils/html.py` module to include query
parameters in the extracted URLs.

**Issue**: N/A

**Dependencies**: No additional dependencies required for this change.

**Twitter handle**: N/A

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-27 02:22:36 +00:00
CT
0e917e319b docs: Add langchainhub to pip install (#20185)
Added langchainhub package in import statement which is required for
"from langchain import hub" to work.

Added sample code to add OpenAI key

Co-authored-by: Chi Yan Tang <100466443+poochiekittie@users.noreply.github.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-27 02:21:40 +00:00
Pamela Fox
45092a36a2 docs: Fix langgraph link (#20244)
Just a simple PR to fix a broken link. Apparently having backticks
outside a link makes it render as code.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-27 02:18:52 +00:00
Chip Davis
e818c75f8a infra: test directory loader multithreaded (#20281)
This is a unit test for #20230 which was a fix for using multithreaded
mode with directory loader @eyurtsev
2024-04-26 19:16:47 -07:00
Guilherme Zanotelli
f931a9ce60 community[patch]: Pass kwargs to SPARQLStore from RdfGraph (#20385)
This introduces `store_kwargs` which behaves similarly to `graph_kwargs`
on the `RdfGraph` object, which will enable users to pass `headers` and
other arguments to the underlying `SPARQLStore` object. I have also made
a [PR in `rdflib` to support passing
`default_graph`](https://github.com/RDFLib/rdflib/pull/2761).

Example usage:
```python
from langchain_community.graphs import RdfGraph

graph = RdfGraph(
    query_endpoint="http://localhost/sparql",
    standard="rdf",
    store_kwargs=dict(
        default_graph="http://example.com/mygraph"
    )
)
```

<!--If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.-->

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 01:38:29 +00:00
Chandre Van Der Westhuizen
e57cf73cf5 docs: Added MindsDB provider (#20322)
MindsDB integrates with LangChain, enabling users to deploy, serve, and
fine-tune models available via LangChain within MindsDB, making them
accessible to numerous data sources.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 01:36:08 +00:00
Jorge Piedrahita Ortiz
40b2e2916b community[minor]: Sambanova llm integration (#20955)
- **Description:** Added [Sambanova systems](https://sambanova.ai/)
integration, including sambaverse and sambastudio LLMs
- **Dependencies:**   sseclient-py  (optional)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 01:05:13 +00:00
Rahul Triptahi
955cf186d2 community[patch]: Ingest source, owner and full_path if present in Document's metadata. (#20949)
Description: The PebbloSafeLoader should first check for owner,
full_path and size in metadata before implementing its own logic.
Dependencies: None
Documentation: NA.

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-04-26 17:50:57 -07:00
Amine Djeghri
790ea75cf7 community[minor]: add exllamav2 library for GPTQ & EXL2 models (#17817)
Added 3 files : 
- Library : ExLlamaV2 
- Test integration
- Notebook

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-27 00:44:43 +00:00
Naveen Tatikonda
8bbdb4f6a0 community[patch]: Add OpenSearch as semantic cache (#20254)
### Description
Use OpenSearch vector store as Semantic Cache.

### Twitter Handle
**@OpenSearchProj**

---------

Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
Co-authored-by: Harish Tatikonda <harishtatikonda@Harishs-MacBook-Air.local>
Co-authored-by: EC2 Default User <ec2-user@ip-172-31-31-155.ec2.internal>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 00:20:24 +00:00
Giacomo Berardi
61f14f00d7 docs: ElasticsearchCache in cache integrations documentation (#20790)
The package for LangChain integrations with Elasticsearch
https://github.com/langchain-ai/langchain-elastic is going to contain a
LLM cache integration in the next release (see
https://github.com/langchain-ai/langchain-elastic/pull/14). This is the
documentation contribution on the page dedicated to cache integrations
2024-04-26 15:43:58 -07:00
Mayank Solanki
8c085fc697 community[patch]: Added a function from_existing_collection in Qdrant vector database. (#20779)
Issue: #20514 
The current implementation of `construct_instance` expects a `texts:
List[str]` that will call the embedding function. This might not be
needed when we already have a client with collection and `path, you
don't want to add any text.

This PR adds a class method that returns a qdrant instance with an
existing client.

Here everytime
cb6e5e56c2/libs/community/langchain_community/vectorstores/qdrant.py (L1592)
`construct_instance` is called, this line sends some text for embedding
generation.

---------

Co-authored-by: Anush <anushshetty90@gmail.com>
2024-04-26 15:34:09 -07:00
Leonid Kuligin
893a924b90 core[minor], community[patch], langchain[patch]: move BaseChatLoader to core (#19607)
Thank you for contributing to LangChain!

- [ ] **PR title**: "core: move BaseChatLoader and BaseToolkit from
community"


- [ ] **PR message**: move BaseChatLoader and BaseToolkit

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-26 21:45:51 +00:00
Erick Friis
d4befd0cfb core: fix batch ordering test (#20952) 2024-04-26 21:17:26 +00:00
Eugene Yurtsev
8ed150b2fe cli[minor]: Fix bug to account for name changes (#20948)
* Fix bug to account for name changes / aliases
* Generate migration list from langchain to langchain_core
2024-04-26 15:45:11 -04:00
ccurme
989e4a92c2 (infra) pass input to test-release (#20947) 2024-04-26 15:17:40 -04:00
Eugene Yurtsev
2fa0ff1a2d cli[minor]: update code to generate migrations from langchain to community (#20946)
Updates code that generates migrations from langchain to community
2024-04-26 15:11:32 -04:00
Erick Friis
078c5d9bc6 infra: nonmaster release checkbox (#20945)
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-26 14:50:07 -04:00
Leonid Kuligin
d4aec8fc8f docs: adding langchain_google_community to the docs (#20665)
Thank you for contributing to LangChain!

- [ ] **PR title**: "docs: step1. adjusting langchain_community ->
langchain_google_community"


- [ ] 
- **Description:** step1. adjusting langchain_community ->
langchain_google_community
2024-04-26 18:49:03 +00:00
ccurme
bf16cefd18 langchain: deprecate create_structured_output_runnable (#20933) 2024-04-26 14:00:40 -04:00
Erick Friis
38eccab3ae upstage: release 0.1.3 (#20941) 2024-04-26 10:36:11 -07:00
Sean
e1c2e2fdfa upstage: Upstage Groundedness Check parameter update (#20914)
* Groundedness Check takes `str` or `list[Document]` as input.

* Deprecate `GroundednessCheck` due to its naming.
* Added `UpstageGroundednessCheck`. 

* Hotfix for Groundedness Check parameter. 
  The name `query` was misleading and it should be `answer` instead.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-26 17:34:05 +00:00
ccurme
84b8e67c9c mistral: release 0.1.4 (#20940) 2024-04-26 13:06:02 -04:00
ccurme
465fbaa30b openai: release 0.1.4 (#20939) 2024-04-26 09:56:49 -07:00
Eugene Yurtsev
12c906f6ce cli[minor]: Improve partner migrations (#20938)
This auto generates partner migrations.

At the moment the migration is from community -> partner.

So one would need to run the migration script twice to go from langchain to partner.
2024-04-26 12:30:15 -04:00
Eugene Yurtsev
5653f36adc cli[minor]: Add script to generate migrations for partner packages (#20932)
Add script to help generate migrations.

This works well for partner packages. Migrations are generated based on run time rather than static analysis (much simpler to get the correct migrations implemented).

The script for generating migrations from langchain to community still needs work.
2024-04-26 11:17:20 -04:00
ccurme
fe1304afc4 openai: add unit test (#20931)
Test a helper function that was added earlier.
2024-04-26 15:02:19 +00:00
Eugene Yurtsev
6598757037 cli[minor]: Add first version of migrate (#20902)
Adds a first version of the migrate script.
2024-04-26 10:50:21 -04:00
Pengcheng Liu
d95e9fb67f docs: add tool calling example in Tongyi chat model integration. (#20925)
**Description:** add tool calling example in Tongyi chat model
integration.
  **Issue:** None
  **Dependencies:** None
2024-04-26 10:18:54 -04:00
Lei Zhang
9281841cfe community[patch]: fix integrated test case test_recursive_url_loader.py assertions (issue-20919) (#20920)
**Description:** 
Fix integrated test case test_recursive_url_loader.py

Local testing successful

```shell
(venv) lei@LeideMacBook-Pro community % poetry run pytest tests/integration_tests/document_loaders/test_recursive_url_loader.py
================================================================================ test session starts ================================================================================
platform darwin -- Python 3.11.4, pytest-7.4.4, pluggy-1.4.0 -- /Users/zhanglei/Work/github/langchain/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/zhanglei/Work/github/langchain/libs/community
configfile: pyproject.toml
plugins: syrupy-4.6.1, asyncio-0.20.3, cov-4.1.0, vcr-1.0.2, mock-3.12.0, anyio-3.7.1, dotenv-0.5.2, requests-mock-1.11.0, socket-0.6.0
asyncio: mode=Mode.AUTO
collected 6 items                                                                                                                                                                   

tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader PASSED                                                                 [ 16%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic PASSED                                                   [ 33%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader FAILED                                                                  [ 50%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent PASSED                                                                      [ 66%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_loading_invalid_url PASSED                                                                        [ 83%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties PASSED                                                   [100%]

===================================================================================== FAILURES ======================================================================================
__________________________________________________________________________ test_sync_recursive_url_loader ___________________________________________________________________________

    def test_sync_recursive_url_loader() -> None:
        url = "https://docs.python.org/3.9/"
        loader = RecursiveUrlLoader(
            url, extractor=lambda _: "placeholder", use_async=False, max_depth=2
        )
        docs = loader.load()
>       assert len(docs) == 23
E       AssertionError: assert 24 == 23
E        +  where 24 = len([Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/', 'content_type': 'text/html', 'title': '3.9.18 Documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/py-modindex.html', 'content_type': 'text/html', 'title': 'Python Module Index — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/download.html', 'content_type': 'text/html', 'title': 'Download — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/howto/index.html', 'content_type': 'text/html', 'title': 'Python HOWTOs — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/whatsnew/index.html', 'content_type': 'text/html', 'title': 'Whatâ\x80\x99s New in Python — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/c-api/index.html', 'content_type': 'text/html', 'title': 'Python/C API Reference Manual — Python 3.9.18 documentation', 'language': None}), ...])

tests/integration_tests/document_loaders/test_recursive_url_loader.py:38: AssertionError
================================================================================= warnings summary ==================================================================================
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
  /Users/zhanglei/.pyenv/versions/3.11.4/lib/python3.11/html/parser.py:170: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features="xml"` into the BeautifulSoup constructor.
    k = self.parse_starttag(i)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================ slowest 5 durations ================================================================================
56.75s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
38.99s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
31.20s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
30.37s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
15.44s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
============================================================================== short test summary info ==============================================================================
FAILED tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader - AssertionError: assert 24 == 23
================================================================ 1 failed, 5 passed, 5 warnings in 172.97s (0:02:52) ================================================================
(venv) zhanglei@LeideMacBook-Pro community % poetry run pytest tests/integration_tests/document_loaders/test_recursive_url_loader.py
================================================================================ test session starts ================================================================================
platform darwin -- Python 3.11.4, pytest-7.4.4, pluggy-1.4.0 -- /Users/zhanglei/Work/github/langchain/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/zhanglei/Work/github/langchain/libs/community
configfile: pyproject.toml
plugins: syrupy-4.6.1, asyncio-0.20.3, cov-4.1.0, vcr-1.0.2, mock-3.12.0, anyio-3.7.1, dotenv-0.5.2, requests-mock-1.11.0, socket-0.6.0
asyncio: mode=Mode.AUTO
collected 6 items                                                                                                                                                                   

tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader PASSED                                                                 [ 16%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic PASSED                                                   [ 33%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader PASSED                                                                  [ 50%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent PASSED                                                                      [ 66%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_loading_invalid_url PASSED                                                                        [ 83%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties PASSED                                                   [100%]

================================================================================= warnings summary ==================================================================================
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
  /Users/zhanglei/.pyenv/versions/3.11.4/lib/python3.11/html/parser.py:170: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features="xml"` into the BeautifulSoup constructor.
    k = self.parse_starttag(i)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================ slowest 5 durations ================================================================================
46.99s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
32.43s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
31.23s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
30.75s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
15.89s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
===================================================================== 6 passed, 5 warnings in 157.42s (0:02:37) =====================================================================
(venv) lei@LeideMacBook-Pro community % 
```

**Issue:** https://github.com/langchain-ai/langchain/issues/20919

**Twitter handle:** @coolbeevip
2024-04-26 10:00:08 -04:00
ccurme
7d8d0229fa remove placeholder error message (#20340) 2024-04-26 13:48:48 +00:00
William FH
4c437ebb9c Use lstv2 (#20747) 2024-04-25 16:51:42 -07:00
ccurme
891ae37437 langchain: support PineconeVectorStore in self query retriever (#20905)
`langchain_pinecone.Pinecone` is deprecated in favor of
`PineconeVectorStore`, and is currently a subclass of
`PineconeVectorStore`.
```python
@deprecated(since="0.0.3", removal="0.2.0", alternative="PineconeVectorStore")
class Pinecone(PineconeVectorStore):
    """Deprecated. Use PineconeVectorStore instead."""

    pass
```
2024-04-25 20:54:58 +00:00
Matt
28df4750ef community[patch]: Add initial tests for AzureSearch vector store (#17663)
**Description:** AzureSearch vector store has no tests. This PR adds
initial tests to validate the code can be imported and used.
**Issue:** N/A
**Dependencies:** azure-search-documents and azure-identity are added as
optional dependencies for testing

---------

Co-authored-by: Matt Gotteiner <[email protected]>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 20:42:01 +00:00
Dristy Srivastava
5f1d1666e3 community[patch]: Add support for pebblo server and client version (#20269)
**Description**:
_PebbloSafeLoader_: Add support for pebblo server and client version


**Documentation:** NA
**Unit test:** NA
**Issue:** NA
**Dependencies:**  None

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 20:39:17 +00:00
am-kinetica
b54b19ba1c community[minor]: Implemented Kinetica Document Loader and added notebooks (#20002)
- [ ] **Kinetica Document Loader**: "community: a class to load
Documents from Kinetica"



- [ ] **Kinetica Document Loader**: 
- **Description:** implemented KineticaLoader in `kinetica_loader.py`
- **Dependencies:** install the Kinetica API using `pip install
gpudb==7.2.0.1 `
2024-04-25 13:39:00 -07:00
Michael Schock
5e60d65917 experimental[patch]: return from HuggingGPT task executor task.run() exception (#20219)
**Description:** Fixes a bug in the HuggingGPT task execution logic
here:

      except Exception as e:
          self.status = "failed"
          self.message = str(e)
      self.status = "completed"
      self.save_product()

where a caught exception effectively just sets `self.message` and can
then throw an exception if, e.g., `self.product` is not defined.

**Issue:** None that I'm aware of.
**Dependencies:** None
**Twitter handle:** https://twitter.com/michaeljschock

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 20:16:39 +00:00
Anish Chakraborty
898362de81 core[patch]: improve comma separated list output parser to handle non-space separated list (#20434)
- **Description:** Changes
`lanchain_core.output_parsers.CommaSeparatedListOutputParser` to handle
`,` as a delimiter alongside the previous implementation which used `, `
as delimiter.
- **Issue:** Started noticing that some results returned by LLMs were
not getting parsed correctly when the output contained `,` instead of `,
`.
  - **Dependencies:** No
  - **Twitter handle:** not active on twitter.


<!---
If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
-->
2024-04-25 20:10:56 +00:00
Michael Schock
63a07f52df experimental[patch]: remove \n from AutoGPT feedback_tool exit check (#20132) 2024-04-25 20:10:33 +00:00
Shengsheng Huang
fd1061e7bf community[patch]: add more data types support to ipex-llm llm integration (#20833)
- **Description**:  
- **add support for more data types**: by default `IpexLLM` will load
the model in int4 format. This PR adds more data types support such as
`sym_in5`, `sym_int8`, etc. Data formats like NF3, NF4, FP4 and FP8 are
only supported on GPU and will be added in future PR.
    - Fix a small issue in saving/loading, update api docs
- **Dependencies**: `ipex-llm` library
- **Document**: In `docs/docs/integrations/llms/ipex_llm.ipynb`, added
instructions for saving/loading low-bit model.
- **Tests**: added new test cases to
`libs/community/tests/integration_tests/llms/test_ipex_llm.py`, added
config params.
- **Contribution maintainer**: @shane-huang
2024-04-25 12:58:18 -07:00
Rahul Triptahi
dc921f0823 community[patch]: Add semantic info to metadata, classified by pebblo-server. (#20468)
Description: Add support for Semantic topics and entities.
Classification done by pebblo-server is not used to enhance metadata of
Documents loaded by document loaders.
Dependencies: None
Documentation: Updated.

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-04-25 12:55:33 -07:00
Eugene Yurtsev
a5028b6356 cli[minor]: Add __version__ (#20903)
Add __version__ to cli
2024-04-25 15:51:33 -04:00
Jingpan Xiong
1202017c56 community[minor]: Add relyt vector database (#20316)
Co-authored-by: kaka <kaka@zbyte-inc.cloud>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: jingsi <jingsi@leadincloud.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 19:49:29 +00:00
davidefantiniIntel
f386f71bb3 community: fix tqdm import (#20263)
Description: Fix tqdm import in QuantizedBiEncoderEmbeddings
2024-04-25 19:44:53 +00:00
Andres Algaba
05ae8ca7d4 community[patch]: deprecate persist method in Chroma (#20855)
Thank you for contributing to LangChain!

- [x] **PR title**

- [x] **PR message**:
- **Description:** Deprecate persist method in Chroma no longer exists
in Chroma 0.4.x
    - **Issue:** #20851 
    - **Dependencies:** None
    - **Twitter handle:** AndresAlgaba1

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 19:42:03 +00:00
ccurme
fdabd3cdf5 mistral, openai: support custom tokenizers in chat models (#20901) 2024-04-25 15:23:29 -04:00
ccurme
6986e44959 docs: update chat model feature table (#20899) 2024-04-25 15:05:43 -04:00
ccurme
b8db73233c core, community: deprecate tool.__call__ (#20900)
Does not update docs.
2024-04-25 14:50:39 -04:00
merdan
52896258ee docs: hide model import in multiple_tools.ipynb (#20883)
**Description:** 
This PR removes an unnecessary code snippet from the documentation. The
snippet in question is not relevant to the content and does not
contribute to the overall understanding of the topic. It contained
redundant imports and unused code, potentially causing confusion for
readers.

**Issue:** 
There is no specific issue number associated with this change.

**Dependencies:** 
No additional dependencies are required for this change.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 18:47:22 +00:00
Tomaz Bratanic
520972fd0f community[patch]: Support passing graph object to Neo4j integrations (#20876)
For driver connection reusage, we introduce passing the graph object to
neo4j integrations
2024-04-25 11:30:22 -07:00
Lei Zhang
748a6ae609 community[patch]: add HTTP response headers Content-Type to metadata of RecursiveUrlLoader document (#20875)
**Description:** 
The RecursiveUrlLoader loader offers a link_regex parameter that can
filter out URLs. However, this filtering capability is limited, and if
the internal links of the website change, unexpected resources may be
loaded. These resources, such as font files, can cause problems in
subsequent embedding processing.

>
https://blog.langchain.dev/assets/fonts/source-sans-pro-v21-latin-ext_latin-regular.woff2?v=0312715cbf

We can add the Content-Type in the HTTP response headers to the document
metadata so developers can choose which resources to use. This allows
developers to make their own choices.

For example, the following may be a good choice for text knowledge.

- text/plain - simple text file
- text/html - HTML web page
- text/xml - XML format file
- text/json - JSON format data
- application/pdf - PDF file
- application/msword - Word document

and ignore the following

- text/css - CSS stylesheet
- text/javascript - JavaScript script
- application/octet-stream - binary data
- image/jpeg - JPEG image
- image/png - PNG image
- image/gif - GIF image
- image/svg+xml - SVG image
- audio/mpeg - MPEG audio files
- video/mp4 - MP4 video file
- application/font-woff - WOFF font file
- application/font-ttf - TTF font file
- application/zip - ZIP compressed file
- application/octet-stream - binary data

**Twitter handle:** @coolbeevip

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 11:29:41 -07:00
samanhappy
37cbbc00a9 docs: Fix broken link in agents.ipynb (#20872) 2024-04-25 10:42:06 -07:00
fzowl
a6b8ff23bd docs: Use voyage-law-2 in the examples (#20784)
Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


**Description:** In VoyageAI text-embedding examples use voyage-law-2
model


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
2024-04-25 10:41:36 -07:00
Erick Friis
eca3640af7 upstage: release 0.1.2 (#20898) 2024-04-25 10:41:19 -07:00
Pavlo Paliychuk
82b5bdc7a1 docs: Fix misplaced zep cloud example links (#20867)
Thank you for contributing to LangChain!

- [x] **PR title**: Fix misplaced zep cloud example links
- [x] **PR message**: 
- **Description:** Fixes misplaced links for vector store and memory zep
cloud examples

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
2024-04-25 10:41:08 -07:00
Joan Fontanals
baefbfb14e community[mionr]: add Jina Reranker in retrievers module (#19406)
- **Description:** Adapt JinaEmbeddings to run with the new Jina AI
Rerank API
- **Twitter handle:** https://twitter.com/JinaAI_


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 10:27:10 -07:00
Erick Friis
92969d49cb multiple: remove external repo mds (#20896)
api docs build doesn't tolerate them
2024-04-25 17:18:29 +00:00
Jason_Chen
53bb7dbd29 community[patch]: add BeautifulSoupTransformer remove_unwanted_classnames method (#20467)
Add the remove_unwanted_classnames method to the
BeautifulSoupTransformer class, which can filter more effectively.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 17:04:04 +00:00
YISH
ed26149a29 openai[patch]: Allow disablling safe_len_embeddings(OpenAIEmbeddings) (#19743)
OpenAI API compatible server may not support `safe_len_embedding`, 

use `disable_safe_len_embeddings=True` to disable it.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 09:45:52 -07:00
Bagatur
5b83130855 core[minor], langchain[patch], community[patch]: mv StructuredQuery (#20849)
mv StructuredQuery to core
2024-04-25 09:40:26 -07:00
Sean
540f384197 partner: Upstage quick documentation update (#20869)
* Updating the provider docs page. 
The RAG example was meant to be moved to cookbook, but was merged by
mistake.

* Fix bug in Groundedness Check

---------

Co-authored-by: JuHyung-Son <sonju0427@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-25 16:36:54 +00:00
Bagatur
ffad3985a1 core[patch]: Release 0.1.46 (#20891) 2024-04-25 15:40:17 +00:00
Mish Ushakov
6ccecf2363 community[minor]: added Browserbase loader (#20478) 2024-04-25 01:11:03 +00:00
aditya thomas
9e694963a4 docs: custom callback handlers page (#20494)
**Description:** Update to the Callbacks page on custom callback
handlers
**Issue:** #20493 
**Dependencies:** None

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 01:08:36 +00:00
Erick Friis
5da9dd1195 mistral: comment batching param (#20868)
Addresses #20523
2024-04-25 00:38:21 +00:00
Ivaylo Bratoev
7c5063ef60 infra: fix how Poetry is installed in the dev container (#20521)
Currently, when a new dev container is created, poetry does not work in
it with the error "No module named 'rapidfuzz'".

Install Poetry outside the project venv so that poetry and project
dependencies do not get mixed. Use pipx to install poetry securely in
its own isolated environment.

Issue: #12237

Twitter handle: https://twitter.com/ibratoev

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 17:33:25 -07:00
GustavoSept
c2d09a5186 experimental[patch]: Makes regex customizable in text_splitter.py (SemanticChunker class) (#20485)
- **Description:** Currently, the regex is static (`r"(?<=[.?!])\s+"`),
which is only useful for certain use cases. The current change only
moves this to be a parameter of split_text(). Which adds flexibility
without making it more complex (as the default regex is still the same).
- **Issue:** Not applicable (I searched, no one seems to have created
this issue yet).
  - **Dependencies:** None.


_If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17._

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 00:32:40 +00:00
William FH
a936f696a6 [Core] Feat: update config CVar in tool.invoke (#20808) 2024-04-24 17:17:21 -07:00
Lei Zhang
2cd907ad7e text-splitters[patch]: fix MarkdownHeaderTextSplitter fails to parse headers with non-printable characters (#20645)
Description: MarkdownHeaderTextSplitter Fails to Parse Headers with
non-printable characters. more #20643

The following is the official test case. Just replacing `# Foo\n\n` with
`\ufeff# Foo\n\n` will cause the test case to fail.

chunk metadata is empty

```python
def test_md_header_text_splitter_1() -> None:
    """Test markdown splitter by header: Case 1."""

    markdown_document = (
        "\ufeff# Foo\n\n"
        "    ## Bar\n\n"
        "Hi this is Jim\n\n"
        "Hi this is Joe\n\n"
        " ## Baz\n\n"
        " Hi this is Molly"
    )
    headers_to_split_on = [
        ("#", "Header 1"),
        ("##", "Header 2"),
    ]
    markdown_splitter = MarkdownHeaderTextSplitter(
        headers_to_split_on=headers_to_split_on,
    )
    output = markdown_splitter.split_text(markdown_document)
    expected_output = [
        Document(
            page_content="Hi this is Jim  \nHi this is Joe",
            metadata={"Header 1": "Foo", "Header 2": "Bar"},
        ),
        Document(
            page_content="Hi this is Molly",
            metadata={"Header 1": "Foo", "Header 2": "Baz"},
        ),
    ]
    assert output == expected_output
```

twitter: @coolbeevip

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 00:07:42 +00:00
jtanios
2968f20970 docs: git dependency name correction (#20662)
This PR corrects the name of the `git` python package to `GitPython`.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 23:43:44 +00:00
ccurme
481d3855dc patch: remove usage of llm, chat model __call__ (#20788)
- `llm(prompt)` -> `llm.invoke(prompt)`
- `llm(prompt=prompt` -> `llm.invoke(prompt)` (same with `messages=`)
- `llm(prompt, callbacks=callbacks)` -> `llm.invoke(prompt,
config={"callbacks": callbacks})`
- `llm(prompt, **kwargs)` -> `llm.invoke(prompt, **kwargs)`
2024-04-24 19:39:23 -04:00
Raghav Dixit
9b7fb381a4 community[patch]: LanceDB integration patch update (#20686)
Description : 

- added functionalities - delete, index creation, using existing
connection object etc.
- updated usage 
- Added LaceDB cloud OSS support

make lint_diff , make test checks done
2024-04-24 16:27:43 -07:00
Nikita Pokidyshev
9e983c9500 langchain[patch]: fix agent_token_buffer_memory not working with openai tools (#20708)
- **Description:** fix a bug in the agent_token_buffer_memory
- **Issue:** agent_token_buffer_memory was not working with openai tools
- **Dependencies:** None
- **Twitter handle:** @pokidyshef
2024-04-24 15:51:58 -07:00
Salika Dave
6353991498 docs: [Retrieval > .. > PDF] update package installation instructions for Unstructured and PDFMiner (#20723)
**Description:** Adds the command to install packages required before
using _Unstructured_ and _PDFMiner_ from `langchain.community`
**Documentation Page Being Updated:** [LangChain > Retrieval > Document
loaders > PDF > Using
Unstructured](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf/#using-unstructured)
**Issue:** #20719 
**Dependencies:** no dependencies
**Twitter handle:** SalikaDave

<!--
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17. -->

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 22:24:11 +00:00
dpdjvhxm
a9e2e98708 docs: Update apache_age.ipynb (#20722)
typo
2024-04-24 22:18:59 +00:00
Erick Friis
1aef8116de upstage: release 0.1.1 (#20864) 2024-04-24 15:18:30 -07:00
junkeon
c8fd51e8c8 upstage: Add Upstage partner package LA and GC (#20651)
---------

Co-authored-by: Sean <chosh0615@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Sean Cho <sean@upstage.ai>
2024-04-24 15:17:20 -07:00
hsmtkk
5ecebf168c docs: imported List is not used (#20720)
# Description

Minor sample code fix

# Issue

Imported `List` is not used.

# Dependencies

N/A

# Twitter handle

N/A
2024-04-24 15:17:07 -07:00
Alex Lee
243ba71b28 langchain[patch]: add aprep_output method to langchain/chains/base.py (#20748)
## Description

Add `aprep_output` method to `langchain/chains/base.py`. Some downstream
`ChatMessageHistory` objects that use async connections require an async
way to append to the context.

It turned out that `ainvoke()` was calling `prep_output` which is
synchronous.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 22:16:25 +00:00
Harrison Chase
43c041cda5 support messages in messages out (#20862) 2024-04-24 14:58:58 -07:00
back2nix
a1614b88ac groq[patch]: groq proxy support (#20758)
# Proxy Fix for Groq Class 🐛 🚀

## Description
This PR fixes a bug related to proxy settings in the `Groq` class,
allowing users to connect to LangChain services via a proxy.

## Changes Made
-  FIX support for specifying proxy settings in the `Groq` class.
-  Resolved the bug causing issues with proxy settings.
-  Did not include unit tests and documentation updates.
-  Did not run make format, make lint, and make test to ensure code
quality and functionality because I couldn't get it to run, so I don't
program in Python and couldn't run `ruff`.
-  Ensured that the changes are backwards compatible.
-  No additional dependencies were added to `pyproject.toml`.

### Error Before Fix
```python
Traceback (most recent call last):
  File "/home/bg/Documents/code/github.com/back2nix/test/groq/main.py", line 9, in <module>
    chat = ChatGroq(
           ^^^^^^^^^
  File "/home/bg/Documents/code/github.com/back2nix/test/groq/venv310/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 120, in __init__
    super().__init__(**kwargs)
  File "/home/bg/Documents/code/github.com/back2nix/test/groq/venv310/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatGroq
__root__
  Invalid `http_client` argument; Expected an instance of `httpx.AsyncClient` but got <class 'httpx.Client'> (type=type_error)
  ```
  
### Example usage after fix
  ```python3
import os

import httpx
from langchain_core.prompts import ChatPromptTemplate
from langchain_groq import ChatGroq

chat = ChatGroq(
    temperature=0,
    groq_api_key=os.environ.get("GROQ_API_KEY"),
    model_name="mixtral-8x7b-32768",
    http_client=httpx.Client(
        proxies="socks5://127.0.0.1:1080",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
    http_async_client=httpx.AsyncClient(
        proxies="socks5://127.0.0.1:1080",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)

system = "You are a helpful assistant."
human = "{text}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])

chain = prompt | chat
out = chain.invoke({"text": "Explain the importance of low latency LLMs"})

print(out)
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-24 21:58:03 +00:00
volodymyr-memsql
493afe4d8d community[patch]: add hybrid search to singlestoredb vectorstore (#20793)
Implemented the ability to enable full-text search within the
SingleStore vector store, offering users a versatile range of search
strategies. This enhancement allows users to seamlessly combine
full-text search with vector search, enabling the following search
strategies:

* Search solely by vector similarity.
* Conduct searches exclusively based on text similarity, utilizing
Lucene internally.
* Filter search results by text similarity score, with the option to
specify a threshold, followed by a search based on vector similarity.
* Filter results by vector similarity score before conducting a search
based on text similarity.
* Perform searches using a weighted sum of vector and text similarity
scores.

Additionally, integration tests have been added to comprehensively cover
all scenarios.
Updated notebook with examples.

CC: @baskaryan, @hwchase17

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-24 21:34:50 +00:00
Tomaz Bratanic
9efab3ed66 community[patch]: Add driver config param for neo4j graph (#20772)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-24 21:14:41 +00:00
Leonid Ganeline
13751c3297 community: tigergraph fixes (#20034)
- added guard on the `pyTigerGraph` import
- added a missed example page in the `docs/integrations/graphs/`
- formatted the `docs/integrations/providers/` page to the consistent
format. Added links.
2024-04-24 16:49:21 -04:00
Martin Kolb
0186e4e633 community[patch]: Advanced filtering for HANA Cloud Vector Engine (#20821)
- **Description:**
This PR adds support for advanced filtering to the integration of HANA
Vector Engine.
The newly supported filtering operators are: $eq, $ne, $gt, $gte, $lt,
$lte, $between, $in, $nin, $like, $and, $or

  - **Issue:** N/A
  - **Dependencies:** no new dependencies added

Added integration tests to:
`libs/community/tests/integration_tests/vectorstores/test_hanavector.py`

Description of the new capabilities in notebook:
`docs/docs/integrations/vectorstores/hanavector.ipynb`
2024-04-24 13:47:27 -07:00
Alex Sherstinsky
12e5ec6de3 community: Support both Predibase SDK-v1 and SDK-v2 in Predibase-LangChain integration (#20859) 2024-04-24 13:31:01 -07:00
Erick Friis
8c95ac3145 docs, multiple: de-beta with_structured_output (#20850) 2024-04-24 19:34:57 +00:00
Nuno Campos
477eb1745c Better support for subgraphs in graph viz (#20840) 2024-04-24 12:32:52 -07:00
aditya thomas
a9c7d47c03 docs: update openai llm documentation (#20827)
**Description:** Bring OpenAI LLM page to the LCEL era
**Issue:** See discussion #20810
**Dependencies:** None
2024-04-24 12:26:57 -07:00
JeffKatzy
5ab3f9a995 community[patch]: standardize chat init args (#20844)
Thank you for contributing to LangChain!

community:perplexity[patch]: standardize init args

updated pplx_api_key and request_timeout so that aliased to api_key, and
timeout respectively. Added test that both continue to set the same
underlying attributes.

Related to
[20085](https://github.com/langchain-ai/langchain/issues/20085)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 12:26:05 -07:00
Pavlo Paliychuk
70ae59bcfe docs: Update Zep Messaging, add links to Zep Cloud Docs (#20848)
Thank you for contributing to LangChain!

- [x] **PR title**: docs: Update Zep Messaging, add links to Zep Cloud
Docs

- [x] **PR message**: 
- **Description:** This PR updates Zep messaging in the docs + links to
Langchain Zep Cloud examples in our documentation
    - **Twitter handle:** @paulpaliychuk51


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
2024-04-24 19:14:54 +00:00
Massimiliano Pronesti
8d1167b32f community[patch]: add support for similarity_score_threshold search in… (#20852)
See
https://github.com/langchain-ai/langchain/issues/20600#issuecomment-2075569338
for details.

@chrislrobert
2024-04-24 19:14:33 +00:00
Bagatur
87d31a3ec0 docs: contributing note (#20843) 2024-04-24 10:41:19 -07:00
Eugene Yurtsev
d8aa72f51d core[minor],langchain[patch]: Move base indexing interface and logic to core (#20667)
This PR moves the interface and the logic to core.

The following changes to namespaces:


`indexes` -> `indexing`
`indexes._api` -> `indexing.api`


Testing code is intentionally duplicated for now since it's testing
different
implementations of the record manager (in-memory vs. SQL).

Common logic will need to be pulled out into the test client.


A follow up PR will move the SQL based implementation outside of
LangChain.
2024-04-24 13:18:42 -04:00
ccurme
3bcfbcc871 groq: handle null queue_time (#20839) 2024-04-24 09:50:09 -07:00
Eugene Yurtsev
30e48c9878 core[patch],community[patch]: Move file chat history back to community (#20834)
Marking as patch since we haven't had releases in between. This just reverting part of a PR from yesterday.
2024-04-24 12:47:25 -04:00
ccurme
6debadaa70 groq: bump core (#20838) 2024-04-24 11:51:46 -04:00
Erick Friis
7984206c95 groq: release 0.1.3 (#20836)
Fixes #20811
2024-04-24 08:06:06 -07:00
Nestor Qin
9111d3a636 community[patch]: Fix message formatting for Anthropic models on Amazon Bedrock (#20801)
**Description:**
This PR fixes an issue in message formatting function for Anthropic
models on Amazon Bedrock.

Currently, LangChain BedrockChat model will crash if it uses Anthropic
models and the model return a message in the following type:
- `AIMessageChunk`

Moreover, when use BedrockChat with for building Agent, the following
message types will trigger the same issue too:
- `HumanMessageChunk`
- `FunctionMessage`

**Issue:**
https://github.com/langchain-ai/langchain/issues/18831

**Dependencies:**
No.

**Testing:**
Manually tested. The following code was failing before the patch and
works after.

```
@tool
def square_root(x: str):
    "Useful when you need to calculate the square root of a number"
    return math.sqrt(int(x))

llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    model_kwargs={ "temperature": 0.0 },
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", FUNCTION_CALL_PROMPT),
        ("human", "Question: {user_input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

tools = [square_root]
tools_string = format_tool_to_anthropic_function(square_root)

agent = (
        RunnablePassthrough.assign(
            user_input=lambda x: x['user_input'],
            agent_scratchpad=lambda x: format_to_openai_function_messages(
                x["intermediate_steps"]
            )
        )
        | prompt
        | llm
        | AnthropicFunctionsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, return_intermediate_steps=True)
output = agent_executor.invoke({
    "user_input": "What is the square root of 2?",
    "tools_string": tools_string,
})
```
List of messages returned from Bedrock:
```
<SystemMessage> content='You are a helpful assistant.'
<HumanMessage> content='Question: What is the square root of 2?'
<AIMessageChunk> content="Okay, let's calculate the square root of 2.<scratchpad>\nTo calculate the square root of a number, I can use the square_root tool:\n\n<function_calls>\n  <invoke>\n    <tool_name>square_root</tool_name>\n    <parameters>\n      <__arg1>2</__arg1>\n    </parameters>\n  </invoke>\n</function_calls>\n</scratchpad>\n\n<function_results>\n<search_result>\nThe square root of 2 is approximately 1.414213562373095\n</search_result>\n</function_results>\n\n<answer>\nThe square root of 2 is approximately 1.414213562373095\n</answer>" id='run-92363df7-eff6-4849-bbba-fa16a1b2988c'"
<FunctionMessage> content='1.4142135623730951' name='square_root'
```
2024-04-23 22:40:39 +00:00
ccurme
06b04b80b8 groq: fix warning filter for integration test (#20806) 2024-04-23 18:11:41 -04:00
ccurme
5a3c65a756 standard tests: add xfails (#20659) 2024-04-23 17:14:16 -04:00
Erick Friis
ddc2274aea standard-tests: split tool calling test (#20803)
just making it a bit easier to grok
2024-04-23 20:59:45 +00:00
ccurme
6622829c67 mistral: catch GatedRepoError, release 0.1.3 (#20802)
https://github.com/langchain-ai/langchain/issues/20618

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-23 20:56:42 +00:00
Eugene Yurtsev
a7c347ab35 langchain[patch]: Update evaluation logic that instantiates a default LLM (#20760)
Favor langchain_openai over langchain_community for evaluation logic.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-23 16:09:32 -04:00
Eugene Yurtsev
72f720fa38 langchain[major]: Remove default instantations of LLMs from VectorstoreToolkit (#20794)
Remove default instantiation from vectorstore toolkit.
2024-04-23 16:09:14 -04:00
ccurme
42de5168b1 langchain: deprecate LLMChain, RetrievalQA, and ConversationalRetrievalChain (#20751) 2024-04-23 15:55:34 -04:00
Erick Friis
30c7951505 core: use qualname in beta message (#20361) 2024-04-23 11:20:13 -07:00
Aliaksandr Kuzmik
5560cc448c community[patch]: fix CometTracer bug (#20796)
Hi! My name is Alex, I'm an SDK engineer from
[Comet](https://www.comet.com/site/)

This PR updates the `CometTracer` class.

Fixed an issue when `CometTracer` failed while logging the data to Comet
because this data is not JSON-encodable.

The problem was in some of the `Run` attributes that could contain
non-default types inside, now these attributes are taken not from the
run instance, but from the `run.dict()` return value.
2024-04-23 13:24:41 -04:00
Eugene Yurtsev
1c89e45c14 langchain[major]: breaks some chains to remove hidden defaults (#20759)
Breaks some chains in langchain to remove hidden chat model / llm instantiation.
2024-04-23 11:11:40 -04:00
Eugene Yurtsev
ad6b5f84e5 community[patch],core[minor]: Move in memory cache implementation to core (#20753)
This PR moves the InMemoryCache implementation from community to core.
2024-04-23 11:10:11 -04:00
Stefano Ottolenghi
4f67ce485a docs: Fix typo to render list (#20774)
This _should_ fix the currently broken list in the [Neo4jVector
page](https://python.langchain.com/docs/integrations/vectorstores/neo4jvector/).

![Screenshot from 2024-04-23
08-40-37](https://github.com/langchain-ai/langchain/assets/114478074/ab5ad622-879e-4764-93db-5f502eae479b)
2024-04-23 14:46:58 +00:00
Eugene Yurtsev
a2cc9b55ba core[patch]: Remove autoupgrade to addable dict in Runnable/RunnableLambda/RunnablePassthrough transform (#20677)
Causes an issue for this code

```python
from langchain.chat_models.openai import ChatOpenAI
from langchain.output_parsers.openai_tools import JsonOutputToolsParser
from langchain.schema import SystemMessage

prompt = SystemMessage(content="You are a nice assistant.") + "{question}"

llm = ChatOpenAI(
    model_kwargs={
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "web_search",
                    "description": "Searches the web for the answer to the question.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "query": {
                                "type": "string",
                                "description": "The question to search for.",
                            },
                        },
                    },
                },
            }
        ],
    },
    streaming=True,
)

parser = JsonOutputToolsParser(first_tool_only=True)

llm_chain = prompt | llm | parser | (lambda x: x)


for chunk in llm_chain.stream({"question": "tell me more about turtles"}):
    print(chunk)

# message = llm_chain.invoke({"question": "tell me more about turtles"})

# print(message)
```

Instead by definition, we'll assume that RunnableLambdas consume the
entire stream and that if the stream isn't addable then it's the last
message of the stream that's in the usable format.

---

If users want to use addable dicts, they can wrap the dict in an
AddableDict class.

---

Likely, need to follow up with the same change for other places in the
code that do the upgrade
2024-04-23 10:35:06 -04:00
Oleksandr Yaremchuk
9428923bab experimental[minor]: upgrade the prompt injection model (#20783)
- **Description:** In January, Laiyer.ai became part of ProtectAI, which
means the model became owned by ProtectAI. In addition to that,
yesterday, we released a new version of the model addressing issues the
Langchain's community and others mentioned to us about false-positives.
The new model has a better accuracy compared to the previous version,
and we thought the Langchain community would benefit from using the
[latest version of the
model](https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2).
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** @alex_yaremchuk
2024-04-23 10:23:39 -04:00
Eugene Yurtsev
645b1e142e core[minor],langchain[patch],community[patch]: Move InMemory and File implementations of Chat History to core (#20752)
This PR moves the implementations for chat history to core. So it's
easier to determine which dependencies need to be broken / add
deprecation warnings
2024-04-23 10:22:11 -04:00
ccurme
7a922f3e48 core, openai: support custom token encoders (#20762) 2024-04-23 13:57:05 +00:00
Chen94yue
b481b73805 Update custom_retriever.ipynb (#20776)
Fixed an error in the sample code to ensure that the code can run
directly.

Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** a description of the change
    - **Issue:** the issue # it fixes, if applicable
    - **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
2024-04-23 13:47:08 +00:00
Bagatur
ed980601e1 docs: update examples in api ref (#20768) 2024-04-23 00:47:52 +00:00
Bagatur
be51cd3bc9 docs: fix api ref link autogeneration (#20766) 2024-04-22 17:36:41 -07:00
monke111
c807f0a6dd Update google_drive.ipynb (#20731)
langchain_community.document_loaders depricated 
new langchain_google_community

Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** a description of the change
    - **Issue:** the issue # it fixes, if applicable
    - **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
2024-04-22 23:30:46 +00:00
Katarina Supe
dc61e23886 docs: update Memgraph docs (#20736)
- **Description:** Memgraph Platform is being run differently now so I
updated this (I am DX engineer from Memgraph).
2024-04-22 19:27:12 -04:00
Tabish Mir
6a0d44d632 docs: Fix link for partition_pdf in Semi_Structured_RAG.ipynb cookbook (#20763)
docs: Fix link for `partition_pdf` in Semi_Structured_RAG.ipynb cookbook

- **Description:** Fix incorrect link to unstructured-io `partition_pdf`
section
2024-04-22 23:22:55 +00:00
Bagatur
fa4d6f9f8b docs: install partner pkgs vercel (#20761) 2024-04-22 23:08:02 +00:00
Christophe Bornet
0ae5027d98 community[patch]: Remove usage of deprecated StoredBlobHistory in CassandraChatMessageHistory (#20666) 2024-04-22 17:11:05 -04:00
Bagatur
eb18f4e155 infra: rm sep repo partner dirs (#20756)
so you can `poetry run pip install -e libs/partners/*/` to your hearts
content
2024-04-22 14:05:39 -07:00
Bagatur
2a11a30572 docs: automatically add api ref links (#20755)
![Screenshot 2024-04-22 at 1 51 13
PM](https://github.com/langchain-ai/langchain/assets/22008038/b8b09fec-3800-4b97-bd26-5571b8308f4a)
2024-04-22 14:05:29 -07:00
1881 changed files with 86817 additions and 17115 deletions

View File

@@ -12,7 +12,7 @@
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
"workspaceFolder": "/workspaces/langchain",
// Prevent the container from shutting down
"overrideCommand": true

View File

@@ -6,7 +6,7 @@ services:
context: ..
volumes:
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/workspaces:cached
- ..:/workspaces/langchain:cached
networks:
- langchain-network
# environment:

View File

@@ -13,6 +13,11 @@ on:
required: true
type: string
default: 'libs/langchain'
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
env:
PYTHON_VERSION: "3.11"
@@ -20,7 +25,7 @@ env:
jobs:
build:
if: github.ref == 'refs/heads/master'
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
environment: Scheduled testing
runs-on: ubuntu-latest
@@ -75,6 +80,7 @@ jobs:
./.github/workflows/_test_release.yml
with:
working-directory: ${{ inputs.working-directory }}
dangerous-nonmaster-release: ${{ inputs.dangerous-nonmaster-release }}
secrets: inherit
pre-release-checks:
@@ -112,7 +118,7 @@ jobs:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
# Here we use:
# - The default regular PyPI index as the *primary* index, meaning
# - The default regular PyPI index as the *primary* index, meaning
# that it takes priority (https://pypi.org/simple)
# - The test PyPI index as an extra index, so that any dependencies that
# are not found on test PyPI can be resolved and installed anyway.
@@ -171,7 +177,7 @@ jobs:
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
poetry run pip install $MIN_VERSIONS
poetry run pip install --force-reinstall $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
@@ -291,14 +297,13 @@ jobs:
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Create Release
- name: Create Tag
uses: ncipollo/release-action@v1
if: ${{ inputs.working-directory == 'libs/langchain' }}
with:
artifacts: "dist/*"
token: ${{ secrets.GITHUB_TOKEN }}
draft: false
generateReleaseNotes: true
tag: v${{ needs.build.outputs.version }}
commit: master
generateReleaseNotes: false
tag: ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}
body: "# Release ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}\n\nPackage-specific release note generation coming soon."
commit: ${{ github.sha }}

View File

@@ -7,6 +7,11 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
env:
POETRY_VERSION: "1.7.1"
@@ -14,7 +19,7 @@ env:
jobs:
build:
if: github.ref == 'refs/heads/master'
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
runs-on: ubuntu-latest
outputs:

View File

@@ -3,9 +3,9 @@ name: CI / cd . / make spell_check
on:
push:
branches: [master]
branches: [master, v0.1]
pull_request:
branches: [master]
branches: [master, v0.1]
permissions:
contents: read

View File

@@ -19,11 +19,11 @@ jobs:
working-directory:
- "libs/partners/openai"
- "libs/partners/anthropic"
# - "libs/partners/ai21" # standard-tests broken
- "libs/partners/ai21"
- "libs/partners/fireworks"
# - "libs/partners/groq" # rate-limited
- "libs/partners/groq"
- "libs/partners/mistralai"
# - "libs/partners/together" # rate-limited
- "libs/partners/together"
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
steps:
- uses: actions/checkout@v4

View File

@@ -17,16 +17,11 @@ clean: docs_clean api_docs_clean
## docs_build: Build the documentation.
docs_build:
docs/.local_build.sh
cd docs && make build-local
## docs_clean: Clean the documentation build artifacts.
docs_clean:
@if [ -d _dist ]; then \
rm -r _dist; \
echo "Directory _dist has been cleaned."; \
else \
echo "Nothing to clean."; \
fi
cd docs && make clean
## docs_linkcheck: Run linkchecker on the documentation.
docs_linkcheck:
@@ -60,12 +55,12 @@ spell_fix:
## lint: Run linting on the project.
lint lint_package lint_tests:
poetry run ruff docs templates cookbook
poetry run ruff check docs templates cookbook
poetry run ruff format docs templates cookbook --diff
poetry run ruff --select I docs templates cookbook
poetry run ruff check --select I docs templates cookbook
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
## format: Format the project files.
format format_diff:
poetry run ruff format docs templates cookbook
poetry run ruff --select I --fix docs templates cookbook
poetry run ruff check --select I --fix docs templates cookbook

View File

@@ -47,7 +47,7 @@ For these applications, LangChain simplifies the entire application lifecycle:
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[LangGraph](https://python.langchain.com/docs/langgraph)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
- **[`LangGraph`](https://python.langchain.com/docs/langgraph)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
### Productionization:
- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.

View File

@@ -464,8 +464,8 @@
" Check if the base64 data is an image by looking at the start of the data\n",
" \"\"\"\n",
" image_signatures = {\n",
" b\"\\xFF\\xD8\\xFF\": \"jpg\",\n",
" b\"\\x89\\x50\\x4E\\x47\\x0D\\x0A\\x1A\\x0A\": \"png\",\n",
" b\"\\xff\\xd8\\xff\": \"jpg\",\n",
" b\"\\x89\\x50\\x4e\\x47\\x0d\\x0a\\x1a\\x0a\": \"png\",\n",
" b\"\\x47\\x49\\x46\\x38\": \"gif\",\n",
" b\"\\x52\\x49\\x46\\x46\": \"webp\",\n",
" }\n",

View File

@@ -185,7 +185,7 @@
" )\n",
" # Text summary chain\n",
" model = VertexAI(\n",
" temperature=0, model_name=\"gemini-pro\", max_output_tokens=1024\n",
" temperature=0, model_name=\"gemini-pro\", max_tokens=1024\n",
" ).with_fallbacks([empty_response])\n",
" summarize_chain = {\"element\": lambda x: x} | prompt | model | StrOutputParser()\n",
"\n",
@@ -254,9 +254,9 @@
"\n",
"def image_summarize(img_base64, prompt):\n",
" \"\"\"Make image summary\"\"\"\n",
" model = ChatVertexAI(model_name=\"gemini-pro-vision\", max_output_tokens=1024)\n",
" model = ChatVertexAI(model=\"gemini-pro-vision\", max_tokens=1024)\n",
"\n",
" msg = model(\n",
" msg = model.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=[\n",
@@ -462,8 +462,8 @@
" Check if the base64 data is an image by looking at the start of the data\n",
" \"\"\"\n",
" image_signatures = {\n",
" b\"\\xFF\\xD8\\xFF\": \"jpg\",\n",
" b\"\\x89\\x50\\x4E\\x47\\x0D\\x0A\\x1A\\x0A\": \"png\",\n",
" b\"\\xff\\xd8\\xff\": \"jpg\",\n",
" b\"\\x89\\x50\\x4e\\x47\\x0d\\x0a\\x1a\\x0a\": \"png\",\n",
" b\"\\x47\\x49\\x46\\x38\": \"gif\",\n",
" b\"\\x52\\x49\\x46\\x46\": \"webp\",\n",
" }\n",
@@ -553,9 +553,7 @@
" \"\"\"\n",
"\n",
" # Multi-modal LLM\n",
" model = ChatVertexAI(\n",
" temperature=0, model_name=\"gemini-pro-vision\", max_output_tokens=1024\n",
" )\n",
" model = ChatVertexAI(temperature=0, model_name=\"gemini-pro-vision\", max_tokens=1024)\n",
"\n",
" # RAG pipeline\n",
" chain = (\n",

View File

@@ -47,6 +47,7 @@ Notebook | Description
[press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
[program_aided_language_model.i...](https://github.com/langchain-ai/langchain/tree/master/cookbook/program_aided_language_model.ipynb) | Implement program-aided language models as described in the provided research paper.
[qa_citations.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/qa_citations.ipynb) | Different ways to get a model to cite its sources.
[rag_upstage_layout_analysis_groundedness_check.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_upstage_layout_analysis_groundedness_check.ipynb) | End-to-end RAG example using Upstage Layout Analysis and Groundedness Check.
[retrieval_in_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/retrieval_in_sql.ipynb) | Perform retrieval-augmented-generation (rag) on a PostgreSQL database using pgvector.
[sales_agent_with_context.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/sales_agent_with_context.ipynb) | Implement a context-aware ai sales agent, salesgpt, that can have natural sales conversations, interact with other systems, and use a product knowledge base to discuss a company's offerings.
[self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
@@ -56,3 +57,4 @@ Notebook | Description
[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside Langchain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings similarly through OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally, storing and indexing these documents in a Vector Store for querying with OracleVS.

View File

@@ -75,7 +75,7 @@
"\n",
"Apply to the [`LLaMA2`](https://arxiv.org/pdf/2307.09288.pdf) paper. \n",
"\n",
"We use the Unstructured [`partition_pdf`](https://unstructured-io.github.io/unstructured/bricks/partition.html#partition-pdf), which segments a PDF document by using a layout model. \n",
"We use the Unstructured [`partition_pdf`](https://unstructured-io.github.io/unstructured/core/partition.html#partition-pdf), which segments a PDF document by using a layout model. \n",
"\n",
"This layout model makes it possible to extract elements, such as tables, from pdfs. \n",
"\n",

View File

@@ -532,8 +532,8 @@
"def is_image_data(b64data):\n",
" \"\"\"Check if the base64 data is an image by looking at the start of the data.\"\"\"\n",
" image_signatures = {\n",
" b\"\\xFF\\xD8\\xFF\": \"jpg\",\n",
" b\"\\x89\\x50\\x4E\\x47\\x0D\\x0A\\x1A\\x0A\": \"png\",\n",
" b\"\\xff\\xd8\\xff\": \"jpg\",\n",
" b\"\\x89\\x50\\x4e\\x47\\x0d\\x0a\\x1a\\x0a\": \"png\",\n",
" b\"\\x47\\x49\\x46\\x38\": \"gif\",\n",
" b\"\\x52\\x49\\x46\\x46\": \"webp\",\n",
" }\n",

View File

@@ -90,7 +90,7 @@
" ) -> AIMessage:\n",
" messages = self.update_messages(input_message)\n",
"\n",
" output_message = self.model(messages)\n",
" output_message = self.model.invoke(messages)\n",
" self.update_messages(output_message)\n",
"\n",
" return output_message"

557
cookbook/cql_agent.ipynb Normal file
View File

@@ -0,0 +1,557 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Python Modules"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install the following Python modules:\n",
"\n",
"```bash\n",
"pip install ipykernel python-dotenv cassio pandas langchain_openai langchain langchain-community langchainhub langchain_experimental openai-multi-tool-use-parallel-patch\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load the `.env` File"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Connection is via `cassio` using `auto=True` parameter, and the notebook uses OpenAI. You should create a `.env` file accordingly.\n",
"\n",
"For Casssandra, set:\n",
"```bash\n",
"CASSANDRA_CONTACT_POINTS\n",
"CASSANDRA_USERNAME\n",
"CASSANDRA_PASSWORD\n",
"CASSANDRA_KEYSPACE\n",
"```\n",
"\n",
"For Astra, set:\n",
"```bash\n",
"ASTRA_DB_APPLICATION_TOKEN\n",
"ASTRA_DB_DATABASE_ID\n",
"ASTRA_DB_KEYSPACE\n",
"```\n",
"\n",
"For example:\n",
"\n",
"```bash\n",
"# Connection to Astra:\n",
"ASTRA_DB_DATABASE_ID=a1b2c3d4-...\n",
"ASTRA_DB_APPLICATION_TOKEN=AstraCS:...\n",
"ASTRA_DB_KEYSPACE=notebooks\n",
"\n",
"# Also set \n",
"OPENAI_API_KEY=sk-....\n",
"```\n",
"\n",
"(You may also modify the below code to directly connect with `cassio`.)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv(override=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to Cassandra"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"import cassio\n",
"\n",
"cassio.init(auto=True)\n",
"session = cassio.config.resolve_session()\n",
"if not session:\n",
" raise Exception(\n",
" \"Check environment configuration or manually configure cassio connection parameters\"\n",
" )\n",
"\n",
"keyspace = os.environ.get(\n",
" \"ASTRA_DB_KEYSPACE\", os.environ.get(\"CASSANDRA_KEYSPACE\", None)\n",
")\n",
"if not keyspace:\n",
" raise ValueError(\"a KEYSPACE environment variable must be set\")\n",
"\n",
"session.set_keyspace(keyspace)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Database"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This needs to be done one time only!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The dataset used is from Kaggle, the [Environmental Sensor Telemetry Data](https://www.kaggle.com/datasets/garystafford/environmental-sensor-data-132k?select=iot_telemetry_data.csv). The next cell will download and unzip the data into a Pandas dataframe. The following cell is instructions to download manually. \n",
"\n",
"The net result of this section is you should have a Pandas dataframe variable `df`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download Automatically"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from io import BytesIO\n",
"from zipfile import ZipFile\n",
"\n",
"import pandas as pd\n",
"import requests\n",
"\n",
"datasetURL = \"https://storage.googleapis.com/kaggle-data-sets/788816/1355729/bundle/archive.zip?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=gcp-kaggle-com%40kaggle-161607.iam.gserviceaccount.com%2F20240404%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20240404T115828Z&X-Goog-Expires=259200&X-Goog-SignedHeaders=host&X-Goog-Signature=2849f003b100eb9dcda8dd8535990f51244292f67e4f5fad36f14aa67f2d4297672d8fe6ff5a39f03a29cda051e33e95d36daab5892b8874dcd5a60228df0361fa26bae491dd4371f02dd20306b583a44ba85a4474376188b1f84765147d3b4f05c57345e5de883c2c29653cce1f3755cd8e645c5e952f4fb1c8a735b22f0c811f97f7bce8d0235d0d3731ca8ab4629ff381f3bae9e35fc1b181c1e69a9c7913a5e42d9d52d53e5f716467205af9c8a3cc6746fc5352e8fbc47cd7d18543626bd67996d18c2045c1e475fc136df83df352fa747f1a3bb73e6ba3985840792ec1de407c15836640ec96db111b173bf16115037d53fdfbfd8ac44145d7f9a546aa\"\n",
"\n",
"response = requests.get(datasetURL)\n",
"if response.status_code == 200:\n",
" zip_file = ZipFile(BytesIO(response.content))\n",
" csv_file_name = zip_file.namelist()[0]\n",
"else:\n",
" print(\"Failed to download the file\")\n",
"\n",
"with zip_file.open(csv_file_name) as csv_file:\n",
" df = pd.read_csv(csv_file)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download Manually"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can download the `.zip` file and unpack the `.csv` contained within. Comment in the next line, and adjust the path to this `.csv` file appropriately."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# df = pd.read_csv(\"/path/to/iot_telemetry_data.csv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Data into Cassandra"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This section assumes the existence of a dataframe `df`, the following cell validates its structure. The Download section above creates this object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"assert df is not None, \"Dataframe 'df' must be set\"\n",
"expected_columns = [\n",
" \"ts\",\n",
" \"device\",\n",
" \"co\",\n",
" \"humidity\",\n",
" \"light\",\n",
" \"lpg\",\n",
" \"motion\",\n",
" \"smoke\",\n",
" \"temp\",\n",
"]\n",
"assert all(\n",
" [column in df.columns for column in expected_columns]\n",
"), \"DataFrame does not have the expected columns\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create and load tables:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import UTC, datetime\n",
"\n",
"from cassandra.query import BatchStatement\n",
"\n",
"# Create sensors table\n",
"table_query = \"\"\"\n",
"CREATE TABLE IF NOT EXISTS iot_sensors (\n",
" device text,\n",
" conditions text,\n",
" room text,\n",
" PRIMARY KEY (device)\n",
")\n",
"WITH COMMENT = 'Environmental IoT room sensor metadata.';\n",
"\"\"\"\n",
"session.execute(table_query)\n",
"\n",
"pstmt = session.prepare(\n",
" \"\"\"\n",
"INSERT INTO iot_sensors (device, conditions, room)\n",
"VALUES (?, ?, ?)\n",
"\"\"\"\n",
")\n",
"\n",
"devices = [\n",
" (\"00:0f:00:70:91:0a\", \"stable conditions, cooler and more humid\", \"room 1\"),\n",
" (\"1c:bf:ce:15:ec:4d\", \"highly variable temperature and humidity\", \"room 2\"),\n",
" (\"b8:27:eb:bf:9d:51\", \"stable conditions, warmer and dryer\", \"room 3\"),\n",
"]\n",
"\n",
"for device, conditions, room in devices:\n",
" session.execute(pstmt, (device, conditions, room))\n",
"\n",
"print(\"Sensors inserted successfully.\")\n",
"\n",
"# Create data table\n",
"table_query = \"\"\"\n",
"CREATE TABLE IF NOT EXISTS iot_data (\n",
" day text,\n",
" device text,\n",
" ts timestamp,\n",
" co double,\n",
" humidity double,\n",
" light boolean,\n",
" lpg double,\n",
" motion boolean,\n",
" smoke double,\n",
" temp double,\n",
" PRIMARY KEY ((day, device), ts)\n",
")\n",
"WITH COMMENT = 'Data from environmental IoT room sensors. Columns include device identifier, timestamp (ts) of the data collection, carbon monoxide level (co), relative humidity, light presence, LPG concentration, motion detection, smoke concentration, and temperature (temp). Data is partitioned by day and device.';\n",
"\"\"\"\n",
"session.execute(table_query)\n",
"\n",
"pstmt = session.prepare(\n",
" \"\"\"\n",
"INSERT INTO iot_data (day, device, ts, co, humidity, light, lpg, motion, smoke, temp)\n",
"VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)\n",
"\"\"\"\n",
")\n",
"\n",
"\n",
"def insert_data_batch(name, group):\n",
" batch = BatchStatement()\n",
" day, device = name\n",
" print(f\"Inserting batch for day: {day}, device: {device}\")\n",
"\n",
" for _, row in group.iterrows():\n",
" timestamp = datetime.fromtimestamp(row[\"ts\"], UTC)\n",
" batch.add(\n",
" pstmt,\n",
" (\n",
" day,\n",
" row[\"device\"],\n",
" timestamp,\n",
" row[\"co\"],\n",
" row[\"humidity\"],\n",
" row[\"light\"],\n",
" row[\"lpg\"],\n",
" row[\"motion\"],\n",
" row[\"smoke\"],\n",
" row[\"temp\"],\n",
" ),\n",
" )\n",
"\n",
" session.execute(batch)\n",
"\n",
"\n",
"# Convert columns to appropriate types\n",
"df[\"light\"] = df[\"light\"] == \"true\"\n",
"df[\"motion\"] = df[\"motion\"] == \"true\"\n",
"df[\"ts\"] = df[\"ts\"].astype(float)\n",
"df[\"day\"] = df[\"ts\"].apply(\n",
" lambda x: datetime.fromtimestamp(x, UTC).strftime(\"%Y-%m-%d\")\n",
")\n",
"\n",
"grouped_df = df.groupby([\"day\", \"device\"])\n",
"\n",
"for name, group in grouped_df:\n",
" insert_data_batch(name, group)\n",
"\n",
"print(\"Data load complete\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(session.keyspace)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load the Tools"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Python `import` statements for the demo:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, create_openai_tools_agent\n",
"from langchain_community.agent_toolkits.cassandra_database.toolkit import (\n",
" CassandraDatabaseToolkit,\n",
")\n",
"from langchain_community.tools.cassandra_database.prompt import QUERY_PATH_PROMPT\n",
"from langchain_community.tools.cassandra_database.tool import (\n",
" GetSchemaCassandraDatabaseTool,\n",
" GetTableDataCassandraDatabaseTool,\n",
" QueryCassandraDatabaseTool,\n",
")\n",
"from langchain_community.utilities.cassandra_database import CassandraDatabase\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `CassandraDatabase` object is loaded from `cassio`, though it does accept a `Session`-type parameter as an alternative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a CassandraDatabase instance\n",
"db = CassandraDatabase(include_tables=[\"iot_sensors\", \"iot_data\"])\n",
"\n",
"# Create the Cassandra Database tools\n",
"query_tool = QueryCassandraDatabaseTool(db=db)\n",
"schema_tool = GetSchemaCassandraDatabaseTool(db=db)\n",
"select_data_tool = GetTableDataCassandraDatabaseTool(db=db)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The tools can be invoked directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Test the tools\n",
"print(\"Executing a CQL query:\")\n",
"query = \"SELECT * FROM iot_sensors LIMIT 5;\"\n",
"result = query_tool.run({\"query\": query})\n",
"print(result)\n",
"\n",
"print(\"\\nGetting the schema for a keyspace:\")\n",
"schema = schema_tool.run({\"keyspace\": keyspace})\n",
"print(schema)\n",
"\n",
"print(\"\\nGetting data from a table:\")\n",
"table = \"iot_data\"\n",
"predicate = \"day = '2020-07-14' and device = 'b8:27:eb:bf:9d:51'\"\n",
"data = select_data_tool.run(\n",
" {\"keyspace\": keyspace, \"table\": table, \"predicate\": predicate, \"limit\": 5}\n",
")\n",
"print(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Agent Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool\n",
"from langchain_experimental.utilities import PythonREPL\n",
"\n",
"python_repl = PythonREPL()\n",
"\n",
"repl_tool = Tool(\n",
" name=\"python_repl\",\n",
" description=\"A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.\",\n",
" func=python_repl.run,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-4-1106-preview\")\n",
"toolkit = CassandraDatabaseToolkit(db=db)\n",
"\n",
"# context = toolkit.get_context()\n",
"# tools = toolkit.get_tools()\n",
"tools = [schema_tool, select_data_tool, repl_tool]\n",
"\n",
"input = (\n",
" QUERY_PATH_PROMPT\n",
" + f\"\"\"\n",
"\n",
"Here is your task: In the {keyspace} keyspace, find the total number of times the temperature of each device has exceeded 23 degrees on July 14, 2020.\n",
" Create a summary report including the name of the room. Use Pandas if helpful.\n",
"\"\"\"\n",
")\n",
"\n",
"prompt = hub.pull(\"hwchase17/openai-tools-agent\")\n",
"\n",
"# messages = [\n",
"# HumanMessagePromptTemplate.from_template(input),\n",
"# AIMessage(content=QUERY_PATH_PROMPT),\n",
"# MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
"# ]\n",
"\n",
"# prompt = ChatPromptTemplate.from_messages(messages)\n",
"# print(prompt)\n",
"\n",
"# Choose the LLM that will drive the agent\n",
"# Only certain models support this\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-1106\", temperature=0)\n",
"\n",
"# Construct the OpenAI Tools agent\n",
"agent = create_openai_tools_agent(llm, tools, prompt)\n",
"\n",
"print(\"Available tools:\")\n",
"for tool in tools:\n",
" print(\"\\t\" + tool.name + \" - \" + tool.description + \" - \" + str(tool))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
"\n",
"response = agent_executor.invoke({\"input\": input})\n",
"\n",
"print(response[\"output\"])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -362,7 +362,7 @@
],
"source": [
"llm = OpenAI()\n",
"llm(query)"
"llm.invoke(query)"
]
},
{

View File

@@ -108,7 +108,7 @@
" return obs_message\n",
"\n",
" def _act(self):\n",
" act_message = self.model(self.message_history)\n",
" act_message = self.model.invoke(self.message_history)\n",
" self.message_history.append(act_message)\n",
" action = int(self.action_parser.parse(act_message.content)[\"action\"])\n",
" return action\n",

View File

@@ -74,7 +74,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",

View File

@@ -79,7 +79,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",
@@ -234,7 +234,7 @@
" termination_clause=self.termination_clause if self.stop else \"\",\n",
" )\n",
"\n",
" self.response = self.model(\n",
" self.response = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=response_prompt),\n",
@@ -263,7 +263,7 @@
" speaker_names=speaker_names,\n",
" )\n",
"\n",
" choice_string = self.model(\n",
" choice_string = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=choice_prompt),\n",
@@ -299,7 +299,7 @@
" ),\n",
" next_speaker=self.next_speaker,\n",
" )\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=next_prompt),\n",

View File

@@ -71,7 +71,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",
@@ -164,7 +164,7 @@
" message_history=\"\\n\".join(self.message_history),\n",
" recent_message=self.message_history[-1],\n",
" )\n",
" bid_string = self.model([SystemMessage(content=prompt)]).content\n",
" bid_string = self.model.invoke([SystemMessage(content=prompt)]).content\n",
" return bid_string"
]
},

View File

@@ -0,0 +1,872 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Oracle AI Vector Search with Document Processing\n",
"Oracle AI Vector Search is designed for Artificial Intelligence (AI) workloads that allows you to query data based on semantics, rather than keywords.\n",
"One of the biggest benefit of Oracle AI Vector Search is that semantic search on unstructured data can be combined with relational search on business data in one single system. This is not only powerful but also significantly more effective because you don't need to add a specialized vector database, eliminating the pain of data fragmentation between multiple systems.\n",
"\n",
"In addition, because Oracle has been building database technologies for so long, your vectors can benefit from all of Oracle Database's most powerful features, like the following:\n",
"\n",
" * Partitioning Support\n",
" * Real Application Clusters scalability\n",
" * Exadata smart scans\n",
" * Shard processing across geographically distributed databases\n",
" * Transactions\n",
" * Parallel SQL\n",
" * Disaster recovery\n",
" * Security\n",
" * Oracle Machine Learning\n",
" * Oracle Graph Database\n",
" * Oracle Spatial and Graph\n",
" * Oracle Blockchain\n",
" * JSON\n",
"\n",
"This guide demonstrates how Oracle AI Vector Search can be used with Langchain to serve an end-to-end RAG pipeline. This guide goes through examples of:\n",
"\n",
" * Loading the documents from various sources using OracleDocLoader\n",
" * Summarizing them within/outside the database using OracleSummary\n",
" * Generating embeddings for them within/outside the database using OracleEmbeddings\n",
" * Chunking them according to different requirements using Advanced Oracle Capabilities from OracleTextSplitter\n",
" * Storing and Indexing them in a Vector Store and querying them for queries in OracleVS"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisites\n",
"\n",
"Please install Oracle Python Client driver to use Langchain with Oracle AI Vector Search. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# pip install oracledb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Demo User\n",
"First, create a demo user with all the required privileges. "
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connection successful!\n",
"User setup done!\n"
]
}
],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"\n",
"# please update with your username, password, hostname and service_name\n",
"# please make sure this user has sufficient privileges to perform all below\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
"\n",
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"\n",
" cursor = conn.cursor()\n",
" cursor.execute(\n",
" \"\"\"\n",
" begin\n",
" -- drop user\n",
" begin\n",
" execute immediate 'drop user testuser cascade';\n",
" exception\n",
" when others then\n",
" dbms_output.put_line('Error setting up user.');\n",
" end;\n",
" execute immediate 'create user testuser identified by testuser';\n",
" execute immediate 'grant connect, unlimited tablespace, create credential, create procedure, create any index to testuser';\n",
" execute immediate 'create or replace directory DEMO_PY_DIR as ''/scratch/hroy/view_storage/hroy_devstorage/demo/orachain''';\n",
" execute immediate 'grant read, write on directory DEMO_PY_DIR to public';\n",
" execute immediate 'grant create mining model to testuser';\n",
"\n",
" -- network access\n",
" begin\n",
" DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(\n",
" host => '*',\n",
" ace => xs$ace_type(privilege_list => xs$name_list('connect'),\n",
" principal_name => 'testuser',\n",
" principal_type => xs_acl.ptype_db));\n",
" end;\n",
" end;\n",
" \"\"\"\n",
" )\n",
" print(\"User setup done!\")\n",
" cursor.close()\n",
" conn.close()\n",
"except Exception as e:\n",
" print(\"User setup failed!\")\n",
" cursor.close()\n",
" conn.close()\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Process Documents using Oracle AI\n",
"Let's think about a scenario that the users have some documents in Oracle Database or in a file system. They want to use the data for Oracle AI Vector Search using Langchain.\n",
"\n",
"For that, the users need to do some document preprocessing. The first step would be to read the documents, generate their summary(if needed) and then chunk/split them if needed. After that, they need to generate the embeddings for those chunks and store into Oracle AI Vector Store. Finally, the users will perform some semantic queries on those data. \n",
"\n",
"Oracle AI Vector Search Langchain library provides a range of document processing functionalities including document loading, splitting, generating summary and embeddings.\n",
"\n",
"In the following sections, we will go through how to use Oracle AI Langchain APIs to achieve each of these functionalities individually. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to Demo User\n",
"The following sample code will show how to connect to Oracle Database. "
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connection successful!\n"
]
}
],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"\n",
"# please update with your username, password, hostname and service_name\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
"\n",
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"except Exception as e:\n",
" print(\"Connection failed!\")\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Populate a Demo Table\n",
"Create a demo table and insert some sample documents."
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Table created and populated.\n"
]
}
],
"source": [
"try:\n",
" cursor = conn.cursor()\n",
"\n",
" drop_table_sql = \"\"\"drop table demo_tab\"\"\"\n",
" cursor.execute(drop_table_sql)\n",
"\n",
" create_table_sql = \"\"\"create table demo_tab (id number, data clob)\"\"\"\n",
" cursor.execute(create_table_sql)\n",
"\n",
" insert_row_sql = \"\"\"insert into demo_tab values (:1, :2)\"\"\"\n",
" rows_to_insert = [\n",
" (\n",
" 1,\n",
" \"If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.\",\n",
" ),\n",
" (\n",
" 2,\n",
" \"A tablespace can be online (accessible) or offline (not accessible) whenever the database is open.\\nA tablespace is usually online so that its data is available to users. The SYSTEM tablespace and temporary tablespaces cannot be taken offline.\",\n",
" ),\n",
" (\n",
" 3,\n",
" \"The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table.\\nSometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.\",\n",
" ),\n",
" ]\n",
" cursor.executemany(insert_row_sql, rows_to_insert)\n",
"\n",
" conn.commit()\n",
"\n",
" print(\"Table created and populated.\")\n",
" cursor.close()\n",
"except Exception as e:\n",
" print(\"Table creation failed.\")\n",
" cursor.close()\n",
" conn.close()\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"Now that we have a demo user and a demo table with some data, we just need to do one more setup. For embedding and summary, we have a few provider options that the users can choose from such as database, 3rd party providers like ocigenai, huggingface, openai, etc. If the users choose to use 3rd party provider, they need to create a credential with corresponding authentication information. On the other hand, if the users choose to use 'database' as provider, they need to load an onnx model to Oracle Database for embeddings; however, for summary, they don't need to do anything."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load ONNX Model\n",
"\n",
"To generate embeddings, Oracle provides a few provider options for users to choose from. The users can choose 'database' provider or some 3rd party providers like OCIGENAI, HuggingFace, etc.\n",
"\n",
"***Note*** If the users choose database option, they need to load an ONNX model to Oracle Database. The users do not need to load an ONNX model to Oracle Database if they choose to use 3rd party provider to generate embeddings.\n",
"\n",
"One of the core benefits of using an ONNX model is that the users do not need to transfer their data to 3rd party to generate embeddings. And also, since it does not involve any network or REST API calls, it may provide better performance.\n",
"\n",
"Here is the sample code to load an ONNX model to Oracle Database:"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ONNX model loaded.\n"
]
}
],
"source": [
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"\n",
"# please update with your related information\n",
"# make sure that you have onnx file in the system\n",
"onnx_dir = \"DEMO_PY_DIR\"\n",
"onnx_file = \"tinybert.onnx\"\n",
"model_name = \"demo_model\"\n",
"\n",
"try:\n",
" OracleEmbeddings.load_onnx_model(conn, onnx_dir, onnx_file, model_name)\n",
" print(\"ONNX model loaded.\")\n",
"except Exception as e:\n",
" print(\"ONNX model loading failed!\")\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Credential\n",
"\n",
"On the other hand, if the users choose to use 3rd party provider to generate embeddings and summary, they need to create credential to access 3rd party provider's end points.\n",
"\n",
"***Note:*** The users do not need to create any credential if they choose to use 'database' provider to generate embeddings and summary. Should the users choose to 3rd party provider, they need to create credential for the 3rd party provider they want to use. \n",
"\n",
"Here is a sample example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" cursor = conn.cursor()\n",
" cursor.execute(\n",
" \"\"\"\n",
" declare\n",
" jo json_object_t;\n",
" begin\n",
" -- HuggingFace\n",
" dbms_vector_chain.drop_credential(credential_name => 'HF_CRED');\n",
" jo := json_object_t();\n",
" jo.put('access_token', '<access_token>');\n",
" dbms_vector_chain.create_credential(\n",
" credential_name => 'HF_CRED',\n",
" params => json(jo.to_string));\n",
"\n",
" -- OCIGENAI\n",
" dbms_vector_chain.drop_credential(credential_name => 'OCI_CRED');\n",
" jo := json_object_t();\n",
" jo.put('user_ocid','<user_ocid>');\n",
" jo.put('tenancy_ocid','<tenancy_ocid>');\n",
" jo.put('compartment_ocid','<compartment_ocid>');\n",
" jo.put('private_key','<private_key>');\n",
" jo.put('fingerprint','<fingerprint>');\n",
" dbms_vector_chain.create_credential(\n",
" credential_name => 'OCI_CRED',\n",
" params => json(jo.to_string));\n",
" end;\n",
" \"\"\"\n",
" )\n",
" cursor.close()\n",
" print(\"Credentials created.\")\n",
"except Exception as ex:\n",
" cursor.close()\n",
" raise"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Documents\n",
"The users can load the documents from Oracle Database or a file system or both. They just need to set the loader parameters accordingly. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters.\n",
"\n",
"The main benefit of using OracleDocLoader is that it can handle 150+ different file formats. You don't need to use different types of loader for different file formats. Here is the list formats that we support: [Oracle Text Supported Document Formats](https://docs.oracle.com/en/database/oracle/oracle-database/23/ccref/oracle-text-supported-document-formats.html)\n",
"\n",
"The following sample code will show how to do that:"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of docs loaded: 3\n"
]
}
],
"source": [
"from langchain_community.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_core.documents import Document\n",
"\n",
"# loading from Oracle Database table\n",
"# make sure you have the table with this specification\n",
"loader_params = {}\n",
"loader_params = {\n",
" \"owner\": \"testuser\",\n",
" \"tablename\": \"demo_tab\",\n",
" \"colname\": \"data\",\n",
"}\n",
"\n",
"\"\"\" load the docs \"\"\"\n",
"loader = OracleDocLoader(conn=conn, params=loader_params)\n",
"docs = loader.load()\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of docs loaded: {len(docs)}\")\n",
"# print(f\"Document-0: {docs[0].page_content}\") # content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generate Summary\n",
"Now that the user loaded the documents, they may want to generate a summary for each document. The Oracle AI Vector Search Langchain library provides an API to do that. There are a few summary generation provider options including Database, OCIGENAI, HuggingFace and so on. The users can choose their preferred provider to generate a summary. Like before, they just need to set the summary parameters accordingly. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** The users may need to set proxy if they want to use some 3rd party summary generation providers other than Oracle's in-house and default provider: 'database'. If you don't have proxy, please remove the proxy parameter when you instantiate the OracleSummary."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"# proxy to be used when we instantiate summary and embedder object\n",
"proxy = \"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following sample code will show how to generate summary:"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of Summaries: 3\n"
]
}
],
"source": [
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_core.documents import Document\n",
"\n",
"# using 'database' provider\n",
"summary_params = {\n",
" \"provider\": \"database\",\n",
" \"glevel\": \"S\",\n",
" \"numParagraphs\": 1,\n",
" \"language\": \"english\",\n",
"}\n",
"\n",
"# get the summary instance\n",
"# Remove proxy if not required\n",
"summ = OracleSummary(conn=conn, params=summary_params, proxy=proxy)\n",
"\n",
"list_summary = []\n",
"for doc in docs:\n",
" summary = summ.get_summary(doc.page_content)\n",
" list_summary.append(summary)\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of Summaries: {len(list_summary)}\")\n",
"# print(f\"Summary-0: {list_summary[0]}\") #content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Split Documents\n",
"The documents can be in different sizes: small, medium, large, or very large. The users like to split/chunk their documents into smaller pieces to generate embeddings. There are lots of different splitting customizations the users can do. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters.\n",
"\n",
"The following sample code will show how to do that:"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of Chunks: 3\n"
]
}
],
"source": [
"from langchain_community.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_core.documents import Document\n",
"\n",
"# split by default parameters\n",
"splitter_params = {\"normalize\": \"all\"}\n",
"\n",
"\"\"\" get the splitter instance \"\"\"\n",
"splitter = OracleTextSplitter(conn=conn, params=splitter_params)\n",
"\n",
"list_chunks = []\n",
"for doc in docs:\n",
" chunks = splitter.split_text(doc.page_content)\n",
" list_chunks.extend(chunks)\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of Chunks: {len(list_chunks)}\")\n",
"# print(f\"Chunk-0: {list_chunks[0]}\") # content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generate Embeddings\n",
"Now that the documents are chunked as per requirements, the users may want to generate embeddings for these chunks. Oracle AI Vector Search provides a number of ways to generate embeddings. The users can load an ONNX embedding model to Oracle Database and use it to generate embeddings or use some 3rd party API's end points to generate embeddings. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** The users may need to set proxy if they want to use some 3rd party embedding generation providers other than 'database' provider (aka using ONNX model)."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"# proxy to be used when we instantiate summary and embedder object\n",
"proxy = \"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following sample code will show how to generate embeddings:"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of embeddings: 3\n"
]
}
],
"source": [
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_core.documents import Document\n",
"\n",
"# using ONNX model loaded to Oracle Database\n",
"embedder_params = {\"provider\": \"database\", \"model\": \"demo_model\"}\n",
"\n",
"# get the embedding instance\n",
"# Remove proxy if not required\n",
"embedder = OracleEmbeddings(conn=conn, params=embedder_params, proxy=proxy)\n",
"\n",
"embeddings = []\n",
"for doc in docs:\n",
" chunks = splitter.split_text(doc.page_content)\n",
" for chunk in chunks:\n",
" embed = embedder.embed_query(chunk)\n",
" embeddings.append(embed)\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of embeddings: {len(embeddings)}\")\n",
"# print(f\"Embedding-0: {embeddings[0]}\") # content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Oracle AI Vector Store\n",
"Now that you know how to use Oracle AI Langchain library APIs individually to process the documents, let us show how to integrate with Oracle AI Vector Store to facilitate the semantic searches."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, let's import all the dependencies."
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"from langchain_community.document_loaders.oracleai import (\n",
" OracleDocLoader,\n",
" OracleTextSplitter,\n",
")\n",
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_community.vectorstores import oraclevs\n",
"from langchain_community.vectorstores.oraclevs import OracleVS\n",
"from langchain_community.vectorstores.utils import DistanceStrategy\n",
"from langchain_core.documents import Document"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, let's combine all document processing stages together. Here is the sample code below:"
]
},
{
"cell_type": "code",
"execution_count": 53,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connection successful!\n",
"ONNX model loaded.\n",
"Number of total chunks with metadata: 3\n"
]
}
],
"source": [
"\"\"\"\n",
"In this sample example, we will use 'database' provider for both summary and embeddings.\n",
"So, we don't need to do the followings:\n",
" - set proxy for 3rd party providers\n",
" - create credential for 3rd party providers\n",
"\n",
"If you choose to use 3rd party provider, \n",
"please follow the necessary steps for proxy and credential.\n",
"\"\"\"\n",
"\n",
"# oracle connection\n",
"# please update with your username, password, hostname, and service_name\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
"\n",
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"except Exception as e:\n",
" print(\"Connection failed!\")\n",
" sys.exit(1)\n",
"\n",
"\n",
"# load onnx model\n",
"# please update with your related information\n",
"onnx_dir = \"DEMO_PY_DIR\"\n",
"onnx_file = \"tinybert.onnx\"\n",
"model_name = \"demo_model\"\n",
"try:\n",
" OracleEmbeddings.load_onnx_model(conn, onnx_dir, onnx_file, model_name)\n",
" print(\"ONNX model loaded.\")\n",
"except Exception as e:\n",
" print(\"ONNX model loading failed!\")\n",
" sys.exit(1)\n",
"\n",
"\n",
"# params\n",
"# please update necessary fields with related information\n",
"loader_params = {\n",
" \"owner\": \"testuser\",\n",
" \"tablename\": \"demo_tab\",\n",
" \"colname\": \"data\",\n",
"}\n",
"summary_params = {\n",
" \"provider\": \"database\",\n",
" \"glevel\": \"S\",\n",
" \"numParagraphs\": 1,\n",
" \"language\": \"english\",\n",
"}\n",
"splitter_params = {\"normalize\": \"all\"}\n",
"embedder_params = {\"provider\": \"database\", \"model\": \"demo_model\"}\n",
"\n",
"# instantiate loader, summary, splitter, and embedder\n",
"loader = OracleDocLoader(conn=conn, params=loader_params)\n",
"summary = OracleSummary(conn=conn, params=summary_params)\n",
"splitter = OracleTextSplitter(conn=conn, params=splitter_params)\n",
"embedder = OracleEmbeddings(conn=conn, params=embedder_params)\n",
"\n",
"# process the documents\n",
"chunks_with_mdata = []\n",
"for id, doc in enumerate(docs, start=1):\n",
" summ = summary.get_summary(doc.page_content)\n",
" chunks = splitter.split_text(doc.page_content)\n",
" for ic, chunk in enumerate(chunks, start=1):\n",
" chunk_metadata = doc.metadata.copy()\n",
" chunk_metadata[\"id\"] = chunk_metadata[\"_oid\"] + \"$\" + str(id) + \"$\" + str(ic)\n",
" chunk_metadata[\"document_id\"] = str(id)\n",
" chunk_metadata[\"document_summary\"] = str(summ[0])\n",
" chunks_with_mdata.append(\n",
" Document(page_content=str(chunk), metadata=chunk_metadata)\n",
" )\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of total chunks with metadata: {len(chunks_with_mdata)}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point, we have processed the documents and generated chunks with metadata. Next, we will create Oracle AI Vector Store with those chunks.\n",
"\n",
"Here is the sample code how to do that:"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Vector Store Table: oravs\n"
]
}
],
"source": [
"# create Oracle AI Vector Store\n",
"vectorstore = OracleVS.from_documents(\n",
" chunks_with_mdata,\n",
" embedder,\n",
" client=conn,\n",
" table_name=\"oravs\",\n",
" distance_strategy=DistanceStrategy.DOT_PRODUCT,\n",
")\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Vector Store Table: {vectorstore.table_name}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above example creates a vector store with DOT_PRODUCT distance strategy. \n",
"\n",
"However, the users can create Oracle AI Vector Store provides different distance strategies. Please see the [comprehensive guide](/docs/integrations/vectorstores/oracle) for more information."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have embeddings stored in vector stores, let's create an index on them to get better semantic search performance during query time.\n",
"\n",
"***Note*** If you are getting some insufficient memory error, please increase ***vector_memory_size*** in your database.\n",
"\n",
"Here is the sample code to create an index:"
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"oraclevs.create_index(\n",
" conn, vectorstore, params={\"idx_name\": \"hnsw_oravs\", \"idx_type\": \"HNSW\"}\n",
")\n",
"\n",
"print(\"Index created.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above example creates a default HNSW index on the embeddings stored in 'oravs' table. The users can set different parameters as per their requirements. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters.\n",
"\n",
"Also, there are different types of vector indices that the users can create. Please see the [comprehensive guide](/docs/integrations/vectorstores/oracle) for more information.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Perform Semantic Search\n",
"All set!\n",
"\n",
"We have processed the documents, stored them to vector store, and then created index to get better query performance. Now let's do some semantic searches.\n",
"\n",
"Here is the sample code for this:"
]
},
{
"cell_type": "code",
"execution_count": 58,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[Document(page_content='The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table. Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.', metadata={'_oid': '662f2f257677f3c2311a8ff999fd34e5', '_rowid': 'AAAR/xAAEAAAAAnAAC', 'id': '662f2f257677f3c2311a8ff999fd34e5$3$1', 'document_id': '3', 'document_summary': 'Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.\\n\\n'})]\n",
"[]\n",
"[(Document(page_content='The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table. Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.', metadata={'_oid': '662f2f257677f3c2311a8ff999fd34e5', '_rowid': 'AAAR/xAAEAAAAAnAAC', 'id': '662f2f257677f3c2311a8ff999fd34e5$3$1', 'document_id': '3', 'document_summary': 'Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.\\n\\n'}), 0.055675752460956573)]\n",
"[]\n",
"[Document(page_content='If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.', metadata={'_oid': '662f2f253acf96b33b430b88699490a2', '_rowid': 'AAAR/xAAEAAAAAnAAA', 'id': '662f2f253acf96b33b430b88699490a2$1$1', 'document_id': '1', 'document_summary': 'If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.\\n\\n'})]\n",
"[Document(page_content='If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.', metadata={'_oid': '662f2f253acf96b33b430b88699490a2', '_rowid': 'AAAR/xAAEAAAAAnAAA', 'id': '662f2f253acf96b33b430b88699490a2$1$1', 'document_id': '1', 'document_summary': 'If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.\\n\\n'})]\n"
]
}
],
"source": [
"query = \"What is Oracle AI Vector Store?\"\n",
"filter = {\"document_id\": [\"1\"]}\n",
"\n",
"# Similarity search without a filter\n",
"print(vectorstore.similarity_search(query, 1))\n",
"\n",
"# Similarity search with a filter\n",
"print(vectorstore.similarity_search(query, 1, filter=filter))\n",
"\n",
"# Similarity search with relevance score\n",
"print(vectorstore.similarity_search_with_score(query, 1))\n",
"\n",
"# Similarity search with relevance score with filter\n",
"print(vectorstore.similarity_search_with_score(query, 1, filter=filter))\n",
"\n",
"# Max marginal relevance search\n",
"print(vectorstore.max_marginal_relevance_search(query, 1, fetch_k=20, lambda_mult=0.5))\n",
"\n",
"# Max marginal relevance search with filter\n",
"print(\n",
" vectorstore.max_marginal_relevance_search(\n",
" query, 1, fetch_k=20, lambda_mult=0.5, filter=filter\n",
" )\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -129,7 +129,7 @@
" return obs_message\n",
"\n",
" def _act(self):\n",
" act_message = self.model(self.message_history)\n",
" act_message = self.model.invoke(self.message_history)\n",
" self.message_history.append(act_message)\n",
" action = int(self.action_parser.parse(act_message.content)[\"action\"])\n",
" return action\n",

View File

@@ -0,0 +1,80 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# RAG using Upstage Layout Analysis and Groundedness Check\n",
"This example illustrates RAG using [Upstage](https://python.langchain.com/docs/integrations/providers/upstage/) Layout Analysis and Groundedness Check."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain_community.vectorstores import DocArrayInMemorySearch\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_core.runnables.base import RunnableSerializable\n",
"from langchain_upstage import (\n",
" ChatUpstage,\n",
" UpstageEmbeddings,\n",
" UpstageGroundednessCheck,\n",
" UpstageLayoutAnalysisLoader,\n",
")\n",
"\n",
"model = ChatUpstage()\n",
"\n",
"files = [\"/PATH/TO/YOUR/FILE.pdf\", \"/PATH/TO/YOUR/FILE2.pdf\"]\n",
"\n",
"loader = UpstageLayoutAnalysisLoader(file_path=files, split=\"element\")\n",
"\n",
"docs = loader.load()\n",
"\n",
"vectorstore = DocArrayInMemorySearch.from_documents(docs, embedding=UpstageEmbeddings())\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"output_parser = StrOutputParser()\n",
"\n",
"retrieved_docs = retriever.get_relevant_documents(\"How many parameters in SOLAR model?\")\n",
"\n",
"groundedness_check = UpstageGroundednessCheck()\n",
"groundedness = \"\"\n",
"while groundedness != \"grounded\":\n",
" chain: RunnableSerializable = RunnablePassthrough() | prompt | model | output_parser\n",
"\n",
" result = chain.invoke(\n",
" {\n",
" \"context\": retrieved_docs,\n",
" \"question\": \"How many parameters in SOLAR model?\",\n",
" }\n",
" )\n",
"\n",
" groundedness = groundedness_check.invoke(\n",
" {\n",
" \"context\": retrieved_docs,\n",
" \"answer\": result,\n",
" }\n",
" )"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -355,15 +355,15 @@
"metadata": {},
"outputs": [],
"source": [
"attribute_info[-2][\n",
" \"description\"\n",
"] += f\". Valid values are {sorted(latest_price['starrating'].value_counts().index.tolist())}\"\n",
"attribute_info[3][\n",
" \"description\"\n",
"] += f\". Valid values are {sorted(latest_price['maxoccupancy'].value_counts().index.tolist())}\"\n",
"attribute_info[-3][\n",
" \"description\"\n",
"] += f\". Valid values are {sorted(latest_price['country'].value_counts().index.tolist())}\""
"attribute_info[-2][\"description\"] += (\n",
" f\". Valid values are {sorted(latest_price['starrating'].value_counts().index.tolist())}\"\n",
")\n",
"attribute_info[3][\"description\"] += (\n",
" f\". Valid values are {sorted(latest_price['maxoccupancy'].value_counts().index.tolist())}\"\n",
")\n",
"attribute_info[-3][\"description\"] += (\n",
" f\". Valid values are {sorted(latest_price['country'].value_counts().index.tolist())}\"\n",
")"
]
},
{
@@ -688,9 +688,9 @@
"metadata": {},
"outputs": [],
"source": [
"attribute_info[-3][\n",
" \"description\"\n",
"] += \". NOTE: Only use the 'eq' operator if a specific country is mentioned. If a region is mentioned, include all relevant countries in filter.\"\n",
"attribute_info[-3][\"description\"] += (\n",
" \". NOTE: Only use the 'eq' operator if a specific country is mentioned. If a region is mentioned, include all relevant countries in filter.\"\n",
")\n",
"chain = load_query_constructor_runnable(\n",
" ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0),\n",
" doc_contents,\n",

View File

@@ -84,7 +84,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",

View File

@@ -70,7 +70,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",

1
docs/.gitignore vendored
View File

@@ -1,2 +1,3 @@
/.quarto/
src/supabase.d.ts
build

View File

@@ -1,24 +0,0 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail
set -o xtrace
SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)"
cd "${SCRIPT_DIR}"
mkdir -p ../_dist
rsync -ruv --exclude node_modules --exclude api_reference --exclude .venv --exclude .docusaurus . ../_dist
cd ../_dist
poetry run python scripts/model_feat_table.py
cp ../cookbook/README.md src/pages/cookbook.mdx
mkdir -p docs/templates
cp ../templates/docs/INDEX.md docs/templates/index.md
poetry run python scripts/copy_templates.py
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
wget -q https://raw.githubusercontent.com/langchain-ai/langgraph/main/README.md -O docs/langgraph.md
yarn
poetry run quarto preview docs

84
docs/Makefile Normal file
View File

@@ -0,0 +1,84 @@
# we build the docs in these stages:
# 1. install vercel and python dependencies
# 2. copy files from "source dir" to "intermediate dir"
# 2. generate files like model feat table, etc in "intermediate dir"
# 3. copy files to their right spots (e.g. langserve readme) in "intermediate dir"
# 4. build the docs from "intermediate dir" to "output dir"
SOURCE_DIR = docs/
INTERMEDIATE_DIR = build/intermediate/docs
OUTPUT_NEW_DIR = build/output-new
OUTPUT_NEW_DOCS_DIR = $(OUTPUT_NEW_DIR)/docs
PYTHON = .venv/bin/python
PARTNER_DEPS_LIST := $(shell ls -1 ../libs/partners | grep -vE "airbyte|ibm" | xargs -I {} echo "../libs/partners/{}" | tr '\n' ' ')
PORT ?= 3001
clean:
rm -rf build
install-vercel-deps:
yum -y update
yum install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y
install-py-deps:
python3 -m venv .venv
$(PYTHON) -m pip install --upgrade pip
$(PYTHON) -m pip install --upgrade uv
$(PYTHON) -m uv pip install -r vercel_requirements.txt
$(PYTHON) -m uv pip install --editable $(PARTNER_DEPS_LIST)
generate-files:
mkdir -p $(INTERMEDIATE_DIR)
cp -r $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
mkdir -p $(INTERMEDIATE_DIR)/templates
cp ../templates/docs/INDEX.md $(INTERMEDIATE_DIR)/templates/index.md
cp ../cookbook/README.md $(INTERMEDIATE_DIR)/cookbook.mdx
$(PYTHON) scripts/model_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/copy_templates.py $(INTERMEDIATE_DIR)
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O $(INTERMEDIATE_DIR)/langserve.md
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/
wget -q https://raw.githubusercontent.com/langchain-ai/langgraph/main/README.md -O $(INTERMEDIATE_DIR)/langgraph.md
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langgraph.md https://github.com/langchain-ai/langgraph/tree/main/
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(INTERMEDIATE_DIR)
copy-infra:
mkdir -p $(OUTPUT_NEW_DIR)
cp -r src $(OUTPUT_NEW_DIR)
cp vercel.json $(OUTPUT_NEW_DIR)
cp babel.config.js $(OUTPUT_NEW_DIR)
cp -r data $(OUTPUT_NEW_DIR)
cp docusaurus.config.js $(OUTPUT_NEW_DIR)
cp package.json $(OUTPUT_NEW_DIR)
cp sidebars.js $(OUTPUT_NEW_DIR)
cp -r static $(OUTPUT_NEW_DIR)
cp yarn.lock $(OUTPUT_NEW_DIR)
render:
$(PYTHON) scripts/notebook_convert.py $(INTERMEDIATE_DIR) $(OUTPUT_NEW_DOCS_DIR)
md-sync:
rsync -avm --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
build: install-py-deps generate-files copy-infra render md-sync
vercel-build: install-vercel-deps build
rm -rf docs
mv $(OUTPUT_NEW_DOCS_DIR) docs
rm -rf build
yarn run docusaurus build
mv build v0.1
mkdir build
mv v0.1 build
mv build/v0.1/404.html build
start:
cd $(OUTPUT_NEW_DIR) && yarn && yarn start --port=$(PORT)

File diff suppressed because one or more lines are too long

View File

@@ -1,76 +0,0 @@
/* eslint-disable prefer-template */
/* eslint-disable no-param-reassign */
// eslint-disable-next-line import/no-extraneous-dependencies
const babel = require("@babel/core");
const path = require("path");
const fs = require("fs");
/**
*
* @param {string|Buffer} content Content of the resource file
* @param {object} [map] SourceMap data consumable by https://github.com/mozilla/source-map
* @param {any} [meta] Meta data, could be anything
*/
async function webpackLoader(content, map, meta) {
const cb = this.async();
if (!this.resourcePath.endsWith(".ts")) {
cb(null, JSON.stringify({ content, imports: [] }), map, meta);
return;
}
try {
const result = await babel.parseAsync(content, {
sourceType: "module",
filename: this.resourcePath,
});
const imports = [];
result.program.body.forEach((node) => {
if (node.type === "ImportDeclaration") {
const source = node.source.value;
if (!source.startsWith("langchain")) {
return;
}
node.specifiers.forEach((specifier) => {
if (specifier.type === "ImportSpecifier") {
const local = specifier.local.name;
const imported = specifier.imported.name;
imports.push({ local, imported, source });
} else {
throw new Error("Unsupported import type");
}
});
}
});
imports.forEach((imp) => {
const { imported, source } = imp;
const moduleName = source.split("/").slice(1).join("_");
const docsPath = path.resolve(__dirname, "docs", "api", moduleName);
const available = fs.readdirSync(docsPath, { withFileTypes: true });
const found = available.find(
(dirent) =>
dirent.isDirectory() &&
fs.existsSync(path.resolve(docsPath, dirent.name, imported + ".md"))
);
if (found) {
imp.docs =
"/" + path.join("docs", "api", moduleName, found.name, imported);
} else {
throw new Error(
`Could not find docs for ${source}.${imported} in docs/api/`
);
}
});
cb(null, JSON.stringify({ content, imports }), map, meta);
} catch (err) {
cb(err);
}
}
module.exports = webpackLoader;

File diff suppressed because it is too large Load Diff

View File

@@ -7,9 +7,9 @@
"source": [
"# Create a runnable with the @chain decorator\n",
"\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionaly equivalent to wrapping in a [`RunnableLambda`](/docs/expression_language/primitives/functions).\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionally equivalent to wrapping in a [`RunnableLambda`](/docs/expression_language/primitives/functions).\n",
"\n",
"This will have the benefit of improved observability by tracing your chain correctly. Any calls to runnables inside this function will be traced as nested childen.\n",
"This will have the benefit of improved observability by tracing your chain correctly. Any calls to runnables inside this function will be traced as nested children.\n",
"\n",
"It will also allow you to use this as any other runnable, compose it in chain, etc.\n",
"\n",

View File

@@ -11,7 +11,7 @@ LCEL was designed from day 1 to **support putting prototypes in production, with
When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means eg. we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens.
[**Async support**](/docs/expression_language/interface)
Any chain built with LCEL can be called both with the synchronous API (eg. in your Jupyter notebook while prototyping) as well as with the asynchronous API (eg. in a [LangServe](/docs/langsmith) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
Any chain built with LCEL can be called both with the synchronous API (eg. in your Jupyter notebook while prototyping) as well as with the asynchronous API (eg. in a [LangServe](/docs/langserve) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
[**Optimized parallel execution**](/docs/expression_language/primitives/parallel)
Whenever your LCEL chains have steps that can be executed in parallel (eg if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.

View File

@@ -3,16 +3,18 @@
{
"cell_type": "raw",
"id": "bc346658-6820-413a-bd8f-11bd3082fe43",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 0.5\n",
"title: Advantages of LCEL\n",
"---\n",
"\n",
"```{=mdx}\n",
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"```"
"import { ColumnContainer, Column } from \"@theme/Columns\";"
]
},
{
@@ -20,6 +22,7 @@
"id": "919a5ae2-ed21-4923-b98f-723c111bac67",
"metadata": {},
"source": [
"\n",
":::{.callout-tip} \n",
"We recommend reading the LCEL [Get started](/docs/expression_language/get_started) section first.\n",
":::"
@@ -56,13 +59,10 @@
"## Invoke\n",
"In the simplest case, we just want to pass in a topic string and get back a joke string:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"```\n",
"\n",
"#### Without LCEL\n"
]
},
@@ -102,11 +102,9 @@
"metadata": {},
"source": [
"\n",
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -146,18 +144,15 @@
"metadata": {},
"source": [
"\n",
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Stream\n",
"If we want to stream results instead, we'll need to change our function:\n",
"\n",
"```{=mdx}\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -198,11 +193,10 @@
"id": "f8e36b0e-c7dc-4130-a51b-189d4b756c7f",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
]
@@ -223,19 +217,18 @@
"id": "b9b41e78-ddeb-44d0-a58b-a0ea0c99a761",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"\n",
"## Batch\n",
"\n",
"If we want to run on a batch of inputs in parallel, we'll again need a new function:\n",
"\n",
"```{=mdx}\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -263,11 +256,11 @@
"id": "9b3e9d34-6775-43c1-93d8-684b58e341ab",
"metadata": {},
"source": [
"```{=mdx}\n",
"\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
]
@@ -287,18 +280,14 @@
"id": "cc5ba36f-eec1-4fc1-8cfe-fa242a7f7809",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"## Async\n",
"\n",
"If we need an asynchronous version:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -334,11 +323,9 @@
"id": "2f209290-498c-4c17-839e-ee9002919846",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -359,10 +346,9 @@
"id": "1f282129-99a3-40f4-b67f-2d0718b1bea9",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Async Batch\n",
"\n",
"```{=mdx}\n",

View File

@@ -13,12 +13,13 @@ LangChain simplifies every stage of the LLM application lifecycle:
- **Deployment**: Turn any chain into an API with [LangServe](/docs/langserve).
import ThemedImage from '@theme/ThemedImage';
import useBaseUrl from '@docusaurus/useBaseUrl';
<ThemedImage
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
sources={{
light: '/svg/langchain_stack.svg',
dark: '/svg/langchain_stack_dark.svg',
light: useBaseUrl('/svg/langchain_stack.svg'),
dark: useBaseUrl('/svg/langchain_stack_dark.svg'),
}}
title="LangChain Framework Overview"
/>

View File

@@ -9,7 +9,7 @@
"\n",
"This notebook shows how to prevent prompt injection attacks using the text classification model from `HuggingFace`.\n",
"\n",
"By default, it uses a *[laiyer/deberta-v3-base-prompt-injection](https://huggingface.co/laiyer/deberta-v3-base-prompt-injection)* model trained to identify prompt injections. \n",
"By default, it uses a *[protectai/deberta-v3-base-prompt-injection-v2](https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2)* model trained to identify prompt injections. \n",
"\n",
"In this notebook, we will use the ONNX version of the model to speed up the inference. "
]
@@ -49,11 +49,15 @@
"from optimum.onnxruntime import ORTModelForSequenceClassification\n",
"from transformers import AutoTokenizer, pipeline\n",
"\n",
"# Using https://huggingface.co/laiyer/deberta-v3-base-prompt-injection\n",
"model_path = \"laiyer/deberta-v3-base-prompt-injection\"\n",
"tokenizer = AutoTokenizer.from_pretrained(model_path)\n",
"tokenizer.model_input_names = [\"input_ids\", \"attention_mask\"] # Hack to run the model\n",
"model = ORTModelForSequenceClassification.from_pretrained(model_path, subfolder=\"onnx\")\n",
"# Using https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2\n",
"model_path = \"laiyer/deberta-v3-base-prompt-injection-v2\"\n",
"revision = None # We recommend specifiying the revision to avoid breaking changes or supply chain attacks\n",
"tokenizer = AutoTokenizer.from_pretrained(\n",
" model_path, revision=revision, model_input_names=[\"input_ids\", \"attention_mask\"]\n",
")\n",
"model = ORTModelForSequenceClassification.from_pretrained(\n",
" model_path, revision=revision, subfolder=\"onnx\"\n",
")\n",
"\n",
"classifier = pipeline(\n",
" \"text-classification\",\n",

View File

@@ -194,7 +194,7 @@
"llm = OpenAI(\n",
" temperature=0, callbacks=[LabelStudioCallbackHandler(project_name=\"My Project\")]\n",
")\n",
"print(llm(\"Tell me a joke\"))"
"print(llm.invoke(\"Tell me a joke\"))"
]
},
{
@@ -270,7 +270,7 @@
" )\n",
" ]\n",
")\n",
"llm_results = chat_llm(\n",
"llm_results = chat_llm.invoke(\n",
" [\n",
" SystemMessage(content=\"Always use a lot of emojis\"),\n",
" HumanMessage(content=\"Tell me a joke\"),\n",

View File

@@ -107,7 +107,7 @@ User tracking allows you to identify your users, track their cost, conversations
from langchain_community.callbacks.llmonitor_callback import LLMonitorCallbackHandler, identify
with identify("user-123"):
llm("Tell me a joke")
llm.invoke("Tell me a joke")
with identify("user-456", user_props={"email": "user456@test.com"}):
agen.run("Who is Leo DiCaprio's girlfriend?")

View File

@@ -103,7 +103,7 @@
" temperature=0,\n",
" callbacks=[PromptLayerCallbackHandler(pl_tags=[\"chatopenai\"])],\n",
")\n",
"llm_results = chat_llm(\n",
"llm_results = chat_llm.invoke(\n",
" [\n",
" HumanMessage(content=\"What comes after 1,2,3 ?\"),\n",
" HumanMessage(content=\"Tell me another joke?\"),\n",
@@ -129,10 +129,11 @@
"from langchain_community.llms import GPT4All\n",
"\n",
"model = GPT4All(model=\"./models/gpt4all-model.bin\", n_ctx=512, n_threads=8)\n",
"callbacks = [PromptLayerCallbackHandler(pl_tags=[\"langchain\", \"gpt4all\"])]\n",
"\n",
"response = model(\n",
"response = model.invoke(\n",
" \"Once upon a time, \",\n",
" callbacks=[PromptLayerCallbackHandler(pl_tags=[\"langchain\", \"gpt4all\"])],\n",
" config={\"callbacks\": callbacks},\n",
")"
]
},
@@ -181,7 +182,7 @@
")\n",
"\n",
"example_prompt = promptlayer.prompts.get(\"example\", version=1, langchain=True)\n",
"openai_llm(example_prompt.format(product=\"toasters\"))"
"openai_llm.invoke(example_prompt.format(product=\"toasters\"))"
]
},
{

View File

@@ -315,7 +315,7 @@
}
],
"source": [
"chat_res = chat_llm(\n",
"chat_res = chat_llm.invoke(\n",
" [\n",
" SystemMessage(content=\"Every answer of yours must be about OpenAI.\"),\n",
" HumanMessage(content=\"Tell me a joke\"),\n",

View File

@@ -72,7 +72,7 @@
"metadata": {},
"outputs": [],
"source": [
"output = chat([HumanMessage(content=\"write a funny joke\")])\n",
"output = chat.invoke([HumanMessage(content=\"write a funny joke\")])\n",
"print(\"output:\", output)"
]
},
@@ -90,7 +90,7 @@
"outputs": [],
"source": [
"kwargs = {\"temperature\": 0.8, \"top_p\": 0.8, \"top_k\": 5}\n",
"output = chat([HumanMessage(content=\"write a funny joke\")], **kwargs)\n",
"output = chat.invoke([HumanMessage(content=\"write a funny joke\")], **kwargs)\n",
"print(\"output:\", output)"
]
},

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,181 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Coze Chat\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Chat with Coze Bot\n",
"\n",
"ChatCoze chat models API by coze.com. For more information, see [https://www.coze.com/open/docs/chat](https://www.coze.com/open/docs/chat)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-25T15:14:24.186131Z",
"start_time": "2024-04-25T15:14:23.831767Z"
}
},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatCoze\n",
"from langchain_core.messages import HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-25T15:14:24.191123Z",
"start_time": "2024-04-25T15:14:24.186330Z"
}
},
"outputs": [],
"source": [
"chat = ChatCoze(\n",
" coze_api_base=\"YOUR_API_BASE\",\n",
" coze_api_key=\"YOUR_API_KEY\",\n",
" bot_id=\"YOUR_BOT_ID\",\n",
" user=\"YOUR_USER_ID\",\n",
" conversation_id=\"YOUR_CONVERSATION_ID\",\n",
" streaming=False,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, you can set your API key and API base with:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"COZE_API_KEY\"] = \"YOUR_API_KEY\"\n",
"os.environ[\"COZE_API_BASE\"] = \"YOUR_API_BASE\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-25T15:14:25.853218Z",
"start_time": "2024-04-25T15:14:24.192408Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='为你找到关于coze的信息如下\n\nCoze是一个由字节跳动推出的AI聊天机器人和应用程序编辑开发平台。\n\n用户无论是否有编程经验都可以通过该平台快速创建各种类型的聊天机器人、智能体、AI应用和插件并将其部署在社交平台和即时聊天应用程序中。\n\n国际版使用的模型比国内版更强大。')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat([HumanMessage(content=\"什么是扣子(coze)\")])"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"## Chat with Coze Streaming"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-25T15:14:25.870044Z",
"start_time": "2024-04-25T15:14:25.863381Z"
},
"collapsed": false
},
"outputs": [],
"source": [
"chat = ChatCoze(\n",
" coze_api_base=\"YOUR_API_BASE\",\n",
" coze_api_key=\"YOUR_API_KEY\",\n",
" bot_id=\"YOUR_BOT_ID\",\n",
" user=\"YOUR_USER_ID\",\n",
" conversation_id=\"YOUR_CONVERSATION_ID\",\n",
" streaming=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-25T15:14:27.153546Z",
"start_time": "2024-04-25T15:14:25.868470Z"
},
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessageChunk(content='为你查询到Coze是一个由字节跳动推出的AI聊天机器人和应用程序编辑开发平台。')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat([HumanMessage(content=\"什么是扣子(coze)\")])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -8,13 +8,8 @@
"source": [
"# DeepInfra\n",
"\n",
"[DeepInfra](https://deepinfra.com/?utm_source=langchain) is a serverless inference as a service that provides access to a [variety of LLMs](https://deepinfra.com/models?utm_source=langchain) and [embeddings models](https://deepinfra.com/models?type=embeddings&utm_source=langchain). This notebook goes over how to use LangChain with DeepInfra for chat models."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[DeepInfra](https://deepinfra.com/?utm_source=langchain) is a serverless inference as a service that provides access to a [variety of LLMs](https://deepinfra.com/models?utm_source=langchain) and [embeddings models](https://deepinfra.com/models?type=embeddings&utm_source=langchain). This notebook goes over how to use LangChain with DeepInfra for chat models.\n",
"\n",
"## Set the Environment API Key\n",
"Make sure to get your API key from DeepInfra. You have to [Login](https://deepinfra.com/login?from=%2Fdash) and get a new token.\n",
"\n",
@@ -24,92 +19,34 @@
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"# get a new token: https://deepinfra.com/login?from=%2Fdash\n",
"\n",
"from getpass import getpass\n",
"\n",
"DEEPINFRA_API_TOKEN = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"# or pass deepinfra_api_token parameter to the ChatDeepInfra constructor\n",
"os.environ[\"DEEPINFRA_API_TOKEN\"] = DEEPINFRA_API_TOKEN"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# get a new token: https://deepinfra.com/login?from=%2Fdash\n",
"\n",
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_community.chat_models import ChatDeepInfra\n",
"from langchain_core.messages import HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat = ChatDeepInfra(model=\"meta-llama/Llama-2-7b-chat-hf\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"DEEPINFRA_API_TOKEN = getpass()\n",
"\n",
"# or pass deepinfra_api_token parameter to the ChatDeepInfra constructor\n",
"os.environ[\"DEEPINFRA_API_TOKEN\"] = DEEPINFRA_API_TOKEN\n",
"\n",
"chat = ChatDeepInfra(model=\"meta-llama/Llama-2-7b-chat-hf\")\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
"]\n",
"chat(messages)"
"chat.invoke(messages)"
]
},
{
@@ -123,7 +60,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "93a21c5c-6ef9-4688-be60-b2e1f94842fb",
"metadata": {
"tags": []
@@ -135,69 +72,32 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[ChatGeneration(text=\" J'aime programmer.\", generation_info=None, message=AIMessage(content=\" J'aime programmer.\", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"await chat.agenerate([messages])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" J'aime la programmation."
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"chat = ChatDeepInfra(\n",
" streaming=True,\n",
" verbose=True,\n",
" callbacks=[StreamingStdOutCallbackHandler()],\n",
")\n",
"chat(messages)"
"chat.invoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c253883f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -114,7 +114,7 @@
"human = \"Translate this sentence from English to French. I love programming.\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI(model_name=\"gemini-pro\", convert_system_message_to_human=True)\n",
"chat = ChatVertexAI(model=\"gemini-pro\", convert_system_message_to_human=True)\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({})"
@@ -233,9 +233,7 @@
}
],
"source": [
"chat = ChatVertexAI(\n",
" model_name=\"codechat-bison\", max_output_tokens=1000, temperature=0.5\n",
")\n",
"chat = ChatVertexAI(model=\"codechat-bison\", max_tokens=1000, temperature=0.5)\n",
"\n",
"message = chat.invoke(\"Write a Python function generating all prime numbers\")\n",
"print(message.content)"
@@ -399,7 +397,7 @@
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm = ChatVertexAI(model_name=\"gemini-pro\", temperature=0)\n",
"llm = ChatVertexAI(model=\"gemini-pro\", temperature=0)\n",
"llm_with_tools = llm.bind_tools([GetWeather])\n",
"ai_msg = llm_with_tools.invoke(\n",
" \"what is the weather like in San Francisco\",\n",
@@ -551,7 +549,7 @@
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI(model_name=\"chat-bison\", max_output_tokens=1000, temperature=0.5)\n",
"chat = ChatVertexAI(model=\"chat-bison\", max_tokens=1000, temperature=0.5)\n",
"chain = prompt | chat\n",
"\n",
"asyncio.run(\n",

View File

@@ -123,7 +123,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [
{
@@ -172,7 +172,7 @@
" <td>F</td>\n",
" <td>59836 Carla Causeway Suite 939\\nPort Eugene, I...</td>\n",
" <td>meltondenise@yahoo.com</td>\n",
" <td>1997-09-09</td>\n",
" <td>1997-11-23</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
@@ -181,7 +181,7 @@
" <td>M</td>\n",
" <td>3108 Christina Forges\\nPort Timothychester, KY...</td>\n",
" <td>erica80@hotmail.com</td>\n",
" <td>1924-05-05</td>\n",
" <td>1924-07-19</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
@@ -190,7 +190,7 @@
" <td>F</td>\n",
" <td>Unit 7405 Box 3052\\nDPO AE 09858</td>\n",
" <td>timothypotts@gmail.com</td>\n",
" <td>1933-09-06</td>\n",
" <td>1933-11-20</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
@@ -199,7 +199,7 @@
" <td>F</td>\n",
" <td>6408 Christopher Hill Apt. 459\\nNew Benjamin, ...</td>\n",
" <td>dadams@gmail.com</td>\n",
" <td>1988-07-28</td>\n",
" <td>1988-10-11</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
@@ -208,7 +208,7 @@
" <td>M</td>\n",
" <td>2241 Bell Gardens Suite 723\\nScottside, CA 38463</td>\n",
" <td>williamayala@gmail.com</td>\n",
" <td>1930-12-19</td>\n",
" <td>1931-03-04</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
@@ -233,14 +233,14 @@
"\n",
" birthdate \n",
"id \n",
"0 1997-09-09 \n",
"1 1924-05-05 \n",
"2 1933-09-06 \n",
"3 1988-07-28 \n",
"4 1930-12-19 "
"0 1997-11-23 \n",
"1 1924-07-19 \n",
"2 1933-11-20 \n",
"3 1988-10-11 \n",
"4 1931-03-04 "
]
},
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
@@ -646,7 +646,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.18"
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -62,7 +62,7 @@
"messages = [system_message, user_message]\n",
"\n",
"# chat with wasm-chat service\n",
"response = chat(messages)\n",
"response = chat.invoke(messages)\n",
"\n",
"print(f\"[Bot] {response.content}\")"
]

View File

@@ -33,7 +33,7 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain langchain-core langchain-community"
"!pip install langchain langchain-core langchain-community httpx"
]
},
{
@@ -89,6 +89,58 @@
"print(response) # should answer something like \"1. Max\\n2. Bella\\n3. Charlie\\n4. Rocky\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Stream Generation\n",
"\n",
"For tasks involving the generation of long text, such as creating an extensive article or translating a large document, it can be advantageous to receive the response in parts, as the text is generated, instead of waiting for the complete text. This makes the application more responsive and efficient, especially when the generated text is extensive. We offer two approaches to meet this need: one synchronous and another asynchronous.\n",
"\n",
"#### Synchronous:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"messages = [HumanMessage(content=\"Suggest 3 names for my dog\")]\n",
"\n",
"for chunk in llm.stream(messages):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Asynchronous:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"\n",
"async def async_invoke_chain(animal: str):\n",
" messages = [HumanMessage(content=f\"Suggest 3 names for my {animal}\")]\n",
" async for chunk in llm._astream(messages):\n",
" print(chunk.message.content, end=\"\", flush=True)\n",
"\n",
"\n",
"await async_invoke_chain(\"dog\")"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -12,7 +12,7 @@
"The `ChatNVIDIA` class is a LangChain chat model that connects to [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/).\n",
"\n",
"\n",
"> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA NGC catalog](https://catalog.ngc.nvidia.com/ai-foundation-models), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.\n",
"> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA API catalog](https://build.nvidia.com/), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.\n",
"> \n",
"> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these models can be deployed anywhere with enterprise-grade security, stability, and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).\n",
"> \n",
@@ -58,13 +58,13 @@
"\n",
"**To get started:**\n",
"\n",
"1. Create a free account with the [NVIDIA NGC](https://catalog.ngc.nvidia.com/) service, which hosts AI solution catalogs, containers, models, etc.\n",
"1. Create a free account with [NVIDIA](https://build.nvidia.com/), which hosts NVIDIA AI Foundation models\n",
"\n",
"2. Navigate to `Catalog > AI Foundation Models > (Model with API endpoint)`.\n",
"2. Click on your model of choice\n",
"\n",
"3. Select the `API` option and click `Generate Key`.\n",
"3. Under `Input` select the `Python` tab, and click `Get API Key`. Then click `Generate Key`.\n",
"\n",
"4. Save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints."
"4. Copy and save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints."
]
},
{
@@ -311,7 +311,7 @@
"\n",
"Some model types support unique prompting techniques and chat messages. We will review a few important ones below.\n",
"\n",
"**To find out more about a specific model, please navigate to the API section of an AI Foundation model [as linked here](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/codellama-13b/api).**"
"**To find out more about a specific model, please navigate to the API section of an AI Foundation model [as linked here](https://build.nvidia.com/).**"
]
},
{

View File

@@ -17,7 +17,7 @@
"\n",
"This notebook shows how to use an experimental wrapper around Ollama that gives it the same API as OpenAI Functions.\n",
"\n",
"Note that more powerful and capable models will perform better with complex schema and/or multiple functions. The examples below use Mistral.\n",
"Note that more powerful and capable models will perform better with complex schema and/or multiple functions. The examples below use llama3 and phi3 models.\n",
"For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).\n",
"\n",
"## Setup\n",
@@ -32,12 +32,18 @@
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-28T00:53:25.276543Z",
"start_time": "2024-04-28T00:53:24.881202Z"
},
"scrolled": true
},
"outputs": [],
"source": [
"from langchain_experimental.llms.ollama_functions import OllamaFunctions\n",
"\n",
"model = OllamaFunctions(model=\"mistral\")"
"model = OllamaFunctions(model=\"llama3\", format=\"json\")"
]
},
{
@@ -50,11 +56,16 @@
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-26T04:59:17.270931Z",
"start_time": "2024-04-26T04:59:17.263347Z"
}
},
"outputs": [],
"source": [
"model = model.bind(\n",
" functions=[\n",
"model = model.bind_tools(\n",
" tools=[\n",
" {\n",
" \"name\": \"get_current_weather\",\n",
" \"description\": \"Get the current weather in a given location\",\n",
@@ -88,12 +99,17 @@
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-26T04:59:26.092428Z",
"start_time": "2024-04-26T04:59:17.272627Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{\"location\": \"Boston, MA\", \"unit\": \"celsius\"}'}})"
"AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{\"location\": \"Boston, MA\"}'}}, id='run-1791f9fe-95ad-4ca4-bdf7-9f73eab31e6f-0')"
]
},
"execution_count": 3,
@@ -111,54 +127,119 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using for extraction\n",
"## Structured Output\n",
"\n",
"One useful thing you can do with function calling here is extracting properties from a given input in a structured format:"
"One useful thing you can do with function calling using `with_structured_output()` function is extracting properties from a given input in a structured format:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-26T04:59:26.098828Z",
"start_time": "2024-04-26T04:59:26.094021Z"
}
},
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"# Schema for structured response\n",
"class Person(BaseModel):\n",
" name: str = Field(description=\"The person's name\", required=True)\n",
" height: float = Field(description=\"The person's height\", required=True)\n",
" hair_color: str = Field(description=\"The person's hair color\")\n",
"\n",
"\n",
"# Prompt template\n",
"prompt = PromptTemplate.from_template(\n",
" \"\"\"Alex is 5 feet tall. \n",
"Claudia is 1 feet taller than Alex and jumps higher than him. \n",
"Claudia is a brunette and Alex is blonde.\n",
"\n",
"Human: {question}\n",
"AI: \"\"\"\n",
")\n",
"\n",
"# Chain\n",
"llm = OllamaFunctions(model=\"phi3\", format=\"json\", temperature=0)\n",
"structured_llm = llm.with_structured_output(Person)\n",
"chain = prompt | structured_llm"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extracting data about Alex"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-26T04:59:30.164955Z",
"start_time": "2024-04-26T04:59:26.099790Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'},\n",
" {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]"
"Person(name='Alex', height=5.0, hair_color='blonde')"
]
},
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import create_extraction_chain\n",
"\n",
"# Schema\n",
"schema = {\n",
" \"properties\": {\n",
" \"name\": {\"type\": \"string\"},\n",
" \"height\": {\"type\": \"integer\"},\n",
" \"hair_color\": {\"type\": \"string\"},\n",
" },\n",
" \"required\": [\"name\", \"height\"],\n",
"}\n",
"\n",
"# Input\n",
"input = \"\"\"Alex is 5 feet tall. Claudia is 1 feet taller than Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\"\"\"\n",
"\n",
"# Run chain\n",
"llm = OllamaFunctions(model=\"mistral\", temperature=0)\n",
"chain = create_extraction_chain(schema, llm)\n",
"chain.run(input)"
"alex = chain.invoke(\"Describe Alex\")\n",
"alex"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extracting data about Claudia"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-26T04:59:31.509846Z",
"start_time": "2024-04-26T04:59:30.165662Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"Person(name='Claudia', height=6.0, hair_color='brunette')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"claudia = chain.invoke(\"Describe Claudia\")\n",
"claudia"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -172,9 +253,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -0,0 +1,119 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2970dd75-8ebf-4b51-8282-9b454b8f356d",
"metadata": {},
"source": [
"# Together AI\n",
"\n",
"[Together AI](https://www.together.ai/) offers an API to query [50+ leading open-source models](https://docs.together.ai/docs/inference-models) in a couple lines of code.\n",
"\n",
"This example goes over how to use LangChain to interact with Together AI models."
]
},
{
"cell_type": "markdown",
"id": "1c47fc36",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ecdb29d",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade langchain-together"
]
},
{
"cell_type": "markdown",
"id": "89883202",
"metadata": {},
"source": [
"## Environment\n",
"\n",
"To use Together AI, you'll need an API key which you can find here:\n",
"https://api.together.ai/settings/api-keys. This can be passed in as an init param\n",
"``together_api_key`` or set as environment variable ``TOGETHER_API_KEY``.\n"
]
},
{
"cell_type": "markdown",
"id": "8304b4d9",
"metadata": {},
"source": [
"## Example"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "637bb53f",
"metadata": {},
"outputs": [],
"source": [
"# Querying chat models with Together AI\n",
"\n",
"from langchain_together import ChatTogether\n",
"\n",
"# choose from our 50+ models here: https://docs.together.ai/docs/inference-models\n",
"chat = ChatTogether(\n",
" # together_api_key=\"YOUR_API_KEY\",\n",
" model=\"meta-llama/Llama-3-70b-chat-hf\",\n",
")\n",
"\n",
"# stream the response back from the model\n",
"for m in chat.stream(\"Tell me fun things to do in NYC\"):\n",
" print(m.content, end=\"\", flush=True)\n",
"\n",
"# if you don't want to do streaming, you can use the invoke method\n",
"# chat.invoke(\"Tell me fun things to do in NYC\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7b7170d-d7c5-4890-9714-a37238343805",
"metadata": {},
"outputs": [],
"source": [
"# Querying code and language models with Together AI\n",
"\n",
"from langchain_together import Together\n",
"\n",
"llm = Together(\n",
" model=\"codellama/CodeLlama-70b-Python-hf\",\n",
" # together_api_key=\"...\"\n",
")\n",
"\n",
"print(llm.invoke(\"def bubble_sort(): \"))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -142,11 +142,70 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## Tool Calling\n",
"ChatTongyi supports tool calling API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'name': 'get_current_weather', 'arguments': '{\"location\": \"San Francisco\"}'}, 'id': '', 'type': 'function'}]}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'tool_calls', 'request_id': 'dae79197-8780-9b7e-8c15-6a83e2a53534', 'token_usage': {'input_tokens': 229, 'output_tokens': 19, 'total_tokens': 248}}, id='run-9e06f837-582b-473b-bb1f-5e99a68ecc10-0', tool_calls=[{'name': 'get_current_weather', 'args': {'location': 'San Francisco'}, 'id': ''}])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.chat_models.tongyi import ChatTongyi\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"tools = [\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_current_time\",\n",
" \"description\": \"当你想知道现在的时间时非常有用。\",\n",
" \"parameters\": {},\n",
" },\n",
" },\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_current_weather\",\n",
" \"description\": \"当你想查询指定城市的天气时非常有用。\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"location\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"城市或县区,比如北京市、杭州市、余杭区等。\",\n",
" }\n",
" },\n",
" },\n",
" \"required\": [\"location\"],\n",
" },\n",
" },\n",
"]\n",
"\n",
"messages = [\n",
" SystemMessage(content=\"You are a helpful assistant.\"),\n",
" HumanMessage(content=\"What is the weather like in San Francisco?\"),\n",
"]\n",
"chatLLM = ChatTongyi()\n",
"llm_kwargs = {\"tools\": tools, \"result_format\": \"message\"}\n",
"ai_message = chatLLM.bind(**llm_kwargs).invoke(messages)\n",
"ai_message"
]
}
],
"metadata": {

View File

@@ -119,7 +119,7 @@
"metadata": {},
"outputs": [],
"source": [
"response = chat(messages)\n",
"response = chat.invoke(messages)\n",
"print(response.content) # Displays the AI-generated poem"
]
},

View File

@@ -216,11 +216,11 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_community.chat_loaders.base import ChatSession\n",
"from langchain_community.chat_loaders.utils import (\n",
" map_ai_messages,\n",
" merge_chat_runs,\n",
")\n",
"from langchain_core.chat_sessions import ChatSession\n",
"\n",
"raw_messages = loader.lazy_load()\n",
"# Merge consecutive messages from the same sender into a single message\n",

View File

@@ -116,11 +116,11 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_community.chat_loaders.base import ChatSession\n",
"from langchain_community.chat_loaders.utils import (\n",
" map_ai_messages,\n",
" merge_chat_runs,\n",
")\n",
"from langchain_core.chat_sessions import ChatSession\n",
"\n",
"raw_messages = loader.lazy_load()\n",
"# Merge consecutive messages from the same sender into a single message\n",

View File

@@ -87,11 +87,11 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_community.chat_loaders.base import ChatSession\n",
"from langchain_community.chat_loaders.utils import (\n",
" map_ai_messages,\n",
" merge_chat_runs,\n",
")\n",
"from langchain_core.chat_sessions import ChatSession\n",
"\n",
"raw_messages = loader.lazy_load()\n",
"# Merge consecutive messages from the same sender into a single message\n",

View File

@@ -136,11 +136,11 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_community.chat_loaders.base import ChatSession\n",
"from langchain_community.chat_loaders.utils import (\n",
" map_ai_messages,\n",
" merge_chat_runs,\n",
")\n",
"from langchain_core.chat_sessions import ChatSession\n",
"\n",
"raw_messages = loader.lazy_load()\n",
"# Merge consecutive messages from the same sender into a single message\n",

View File

@@ -209,11 +209,11 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_community.chat_loaders.base import ChatSession\n",
"from langchain_community.chat_loaders.utils import (\n",
" map_ai_messages,\n",
" merge_chat_runs,\n",
")\n",
"from langchain_core.chat_sessions import ChatSession\n",
"\n",
"raw_messages = loader.lazy_load()\n",
"# Merge consecutive messages from the same sender into a single message\n",

View File

@@ -126,11 +126,11 @@
"source": [
"from typing import List\n",
"\n",
"from langchain_community.chat_loaders.base import ChatSession\n",
"from langchain_community.chat_loaders.utils import (\n",
" map_ai_messages,\n",
" merge_chat_runs,\n",
")\n",
"from langchain_core.chat_sessions import ChatSession\n",
"\n",
"raw_messages = loader.lazy_load()\n",
"# Merge consecutive messages from the same sender into a single message\n",

View File

@@ -0,0 +1,122 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Browserbase\n",
"\n",
"[Browserbase](https://browserbase.com) is a serverless platform for running headless browsers, it offers advanced debugging, session recordings, stealth mode, integrated proxies and captcha solving.\n",
"\n",
"## Installation\n",
"\n",
"- Get an API key from [browserbase.com](https://browserbase.com) and set it in environment variables (`BROWSERBASE_API_KEY`).\n",
"- Install the [Browserbase SDK](http://github.com/browserbase/python-sdk):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"% pip install browserbase"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading documents"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can load webpages into LangChain using `BrowserbaseLoader`. Optionally, you can set `text_content` parameter to convert the pages to text-only representation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import BrowserbaseLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loader = BrowserbaseLoader(\n",
" urls=[\n",
" \"https://example.com\",\n",
" ],\n",
" # Text mode\n",
" text_content=False,\n",
")\n",
"\n",
"docs = loader.load()\n",
"print(docs[0].page_content[:61])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading images\n",
"\n",
"You can also load screenshots of webpages (as bytes) for multi-modal models.\n",
"\n",
"Full example using GPT-4V:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from browserbase import Browserbase\n",
"from browserbase.helpers.gpt4 import GPT4VImage, GPT4VImageDetail\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"chat = ChatOpenAI(model=\"gpt-4-vision-preview\", max_tokens=256)\n",
"browser = Browserbase()\n",
"\n",
"screenshot = browser.screenshot(\"https://browserbase.com\")\n",
"\n",
"result = chat.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"What color is the logo?\"},\n",
" GPT4VImage(screenshot, GPT4VImageDetail.auto),\n",
" ]\n",
" )\n",
" ]\n",
")\n",
"\n",
"print(result.content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.9.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -20,7 +20,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-cloud-bigquery"
"%pip install --upgrade --quiet langchain-google-community[bigquery]"
]
},
{
@@ -31,7 +31,7 @@
},
"outputs": [],
"source": [
"from langchain_community.document_loaders import BigQueryLoader"
"from langchain_google_community import BigQueryLoader"
]
},
{

View File

@@ -21,7 +21,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-cloud-storage"
"%pip install --upgrade --quiet langchain-google-community[gcs]"
]
},
{
@@ -31,7 +31,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import GCSDirectoryLoader"
"from langchain_google_community import GCSDirectoryLoader"
]
},
{

View File

@@ -21,7 +21,7 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-cloud-storage"
"%pip install --upgrade --quiet langchain-google-community[gcs]"
]
},
{
@@ -31,7 +31,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import GCSFileLoader"
"from langchain_google_community import GCSFileLoader"
]
},
{

View File

@@ -38,7 +38,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-api-python-client google-auth-httplib2 google-auth-oauthlib"
"%pip install --upgrade --quiet langchain-google-community[drive]"
]
},
{
@@ -50,7 +50,7 @@
},
"outputs": [],
"source": [
"from langchain_community.document_loaders import GoogleDriveLoader"
"from langchain_google_community import GoogleDriveLoader"
]
},
{
@@ -121,10 +121,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import (\n",
" GoogleDriveLoader,\n",
" UnstructuredFileIOLoader,\n",
")"
"from langchain_community.document_loaders import UnstructuredFileIOLoader\n",
"from langchain_google_community import GoogleDriveLoader"
]
},
{
@@ -219,7 +217,7 @@
"metadata": {},
"source": [
"## Extended usage\n",
"An external component can manage the complexity of Google Drive : `langchain-googledrive`\n",
"An external (unofficial) component can manage the complexity of Google Drive : `langchain-googledrive`\n",
"It's compatible with the ̀`langchain_community.document_loaders.GoogleDriveLoader` and can be used\n",
"in its place.\n",
"\n",
@@ -339,7 +337,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import GoogleDriveLoader\n",
"from langchain_google_community import GoogleDriveLoader\n",
"\n",
"loader = GoogleDriveLoader(\n",
" folder_id=folder_id,\n",
@@ -368,6 +366,54 @@
"doc[0].metadata"
]
},
{
"cell_type": "markdown",
"id": "5ae0a525",
"metadata": {},
"source": [
"### Loading extended metadata\n",
"Following extra fields can also be fetched within metadata of each Document:\n",
" - full_path - Full path of the file/s in google drive.\n",
" - owner - owner of the file/s.\n",
" - size - size of the file/s."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6c0db38c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_community import GoogleDriveLoader\n",
"\n",
"loader = GoogleDriveLoader(\n",
" folder_id=folder_id,\n",
" load_extended_matadata=True,\n",
" # Optional: configure whether to load extended metadata for each Document.\n",
")\n",
"\n",
"doc = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "826d88a7",
"metadata": {},
"source": [
"You can pass load_extended_matadata=True, to add Google Drive document extended details to metadata."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fdaf04e4",
"metadata": {},
"outputs": [],
"source": [
"doc[0].metadata"
]
},
{
"cell_type": "markdown",
"id": "cd13d7d1-db7a-498d-ac98-76ccd9ad9019",

View File

@@ -32,7 +32,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-cloud-speech"
"%pip install --upgrade --quiet langchain-google-community[speech]"
]
},
{
@@ -52,7 +52,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import GoogleSpeechToTextLoader\n",
"from langchain_google_community import GoogleSpeechToTextLoader\n",
"\n",
"project_id = \"<PROJECT_ID>\"\n",
"file_path = \"gs://cloud-samples-data/speech/audio.flac\"\n",
@@ -152,7 +152,7 @@
" RecognitionConfig,\n",
" RecognitionFeatures,\n",
")\n",
"from langchain_community.document_loaders import GoogleSpeechToTextLoader\n",
"from langchain_google_community import GoogleSpeechToTextLoader\n",
"\n",
"project_id = \"<PROJECT_ID>\"\n",
"location = \"global\"\n",

View File

@@ -0,0 +1,125 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Kinetica\n",
"\n",
"This notebooks goes over how to load documents from Kinetica"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install gpudb==7.2.0.1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.kinetica_loader import KineticaLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"## Loading Environment Variables\n",
"import os\n",
"\n",
"from dotenv import load_dotenv\n",
"from langchain_community.vectorstores import (\n",
" KineticaSettings,\n",
")\n",
"\n",
"load_dotenv()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Kinetica needs the connection to the database.\n",
"# This is how to set it up.\n",
"HOST = os.getenv(\"KINETICA_HOST\", \"http://127.0.0.1:9191\")\n",
"USERNAME = os.getenv(\"KINETICA_USERNAME\", \"\")\n",
"PASSWORD = os.getenv(\"KINETICA_PASSWORD\", \"\")\n",
"\n",
"\n",
"def create_config() -> KineticaSettings:\n",
" return KineticaSettings(host=HOST, username=USERNAME, password=PASSWORD)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.kinetica_loader import KineticaLoader\n",
"\n",
"# The following `QUERY` is an example which will not run; this\n",
"# needs to be substituted with a valid `QUERY` that will return\n",
"# data and the `SCHEMA.TABLE` combination must exist in Kinetica.\n",
"\n",
"QUERY = \"select text, survey_id from SCHEMA.TABLE limit 10\"\n",
"kinetica_loader = KineticaLoader(\n",
" QUERY,\n",
" HOST,\n",
" USERNAME,\n",
" PASSWORD,\n",
")\n",
"kinetica_documents = kinetica_loader.load()\n",
"print(kinetica_documents)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.kinetica_loader import KineticaLoader\n",
"\n",
"# The following `QUERY` is an example which will not run; this\n",
"# needs to be substituted with a valid `QUERY` that will return\n",
"# data and the `SCHEMA.TABLE` combination must exist in Kinetica.\n",
"\n",
"QUERY = \"select text, survey_id as source from SCHEMA.TABLE limit 10\"\n",
"snowflake_loader = KineticaLoader(\n",
" query=QUERY,\n",
" host=HOST,\n",
" username=USERNAME,\n",
" password=PASSWORD,\n",
" metadata_columns=[\"source\"],\n",
")\n",
"kinetica_documents = snowflake_loader.load()\n",
"print(kinetica_documents)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -30,13 +30,24 @@
"source": [
"from getpass import getpass\n",
"\n",
"from langchain_community.document_loaders.larksuite import LarkSuiteDocLoader\n",
"from langchain_community.document_loaders.larksuite import (\n",
" LarkSuiteDocLoader,\n",
" LarkSuiteWikiLoader,\n",
")\n",
"\n",
"DOMAIN = input(\"larksuite domain\")\n",
"ACCESS_TOKEN = getpass(\"larksuite tenant_access_token or user_access_token\")\n",
"DOCUMENT_ID = input(\"larksuite document id\")"
]
},
{
"cell_type": "markdown",
"id": "4b6b9a66",
"metadata": {},
"source": [
"## Load From Document"
]
},
{
"cell_type": "code",
"execution_count": 3,
@@ -65,6 +76,38 @@
"pprint(docs)"
]
},
{
"cell_type": "markdown",
"id": "86f4a714",
"metadata": {},
"source": [
"## Load From Wiki"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "7332dfb9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[Document(page_content='Test doc\\nThis is a test wiki doc.\\n', metadata={'document_id': 'TxOKdtMWaoSTDLxYS4ZcdEI7nwc', 'revision_id': 15, 'title': 'Test doc'})]\n"
]
}
],
"source": [
"from pprint import pprint\n",
"\n",
"DOCUMENT_ID = input(\"larksuite wiki id\")\n",
"larksuite_loader = LarkSuiteWikiLoader(DOMAIN, ACCESS_TOKEN, DOCUMENT_ID)\n",
"docs = larksuite_loader.load()\n",
"\n",
"pprint(docs)"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -0,0 +1,130 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "vm8vn9t8DvC_"
},
"source": [
"# Near Blockchain"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5WjXERXzFEhg"
},
"source": [
"## Overview"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "juAmbgoWD17u"
},
"source": [
"The intention of this notebook is to provide a means of testing functionality in the Langchain Document Loader for Near Blockchain.\n",
"\n",
"Initially this Loader supports:\n",
"\n",
"* Loading NFTs as Documents from NFT Smart Contracts (NEP-171 and NEP-177)\n",
"* Near Mainnnet, Near Testnet (default is mainnet)\n",
"* Mintbase's Graph API\n",
"\n",
"It can be extended if the community finds value in this loader. Specifically:\n",
"\n",
"* Additional APIs can be added (e.g. Tranction-related APIs)\n",
"\n",
"This Document Loader Requires:\n",
"\n",
"* A free [Mintbase API Key](https://docs.mintbase.xyz/dev/mintbase-graph/)\n",
"\n",
"The output takes the following format:\n",
"\n",
"- pageContent= Individual NFT\n",
"- metadata={'source': 'nft.yearofchef.near', 'blockchain': 'mainnet', 'tokenId': '1846'}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load NFTs into Document Loader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get MINTBASE_API_KEY from https://docs.mintbase.xyz/dev/mintbase-graph/\n",
"\n",
"mintbaseApiKey = \"...\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Option 1: Ethereum Mainnet (default BlockchainType)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "J3LWHARC-Kn0"
},
"outputs": [],
"source": [
"from MintbaseLoader import MintbaseDocumentLoader\n",
"\n",
"contractAddress = \"nft.yearofchef.near\" # Year of chef contract address\n",
"\n",
"\n",
"blockchainLoader = MintbaseDocumentLoader(\n",
" contract_address=contractAddress, blockchain_type=\"mainnet\", api_key=\"omni-site\"\n",
")\n",
"\n",
"nfts = blockchainLoader.load()\n",
"\n",
"print(nfts[:1])\n",
"\n",
"for doc in blockchainLoader.lazy_load():\n",
" print()\n",
" print(type(doc))\n",
" print(doc)"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [
"5WjXERXzFEhg"
],
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -0,0 +1,236 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Oracle AI Vector Search: Document Processing\n",
"Oracle AI Vector Search is designed for Artificial Intelligence (AI) workloads that allows you to query data based on semantics, rather than keywords. One of the biggest benefit of Oracle AI Vector Search is that semantic search on unstructured data can be combined with relational search on business data in one single system. This is not only powerful but also significantly more effective because you don't need to add a specialized vector database, eliminating the pain of data fragmentation between multiple systems.\n",
"\n",
"The guide demonstrates how to use Document Processing Capabilities within Oracle AI Vector Search to load and chunk documents using OracleDocLoader and OracleTextSplitter respectively."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisites\n",
"\n",
"Please install Oracle Python Client driver to use Langchain with Oracle AI Vector Search. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# pip install oracledb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to Oracle Database\n",
"The following sample code will show how to connect to Oracle Database. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"\n",
"# please update with your username, password, hostname and service_name\n",
"username = \"<username>\"\n",
"password = \"<password>\"\n",
"dsn = \"<hostname>/<service_name>\"\n",
"\n",
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"except Exception as e:\n",
" print(\"Connection failed!\")\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's create a table and insert some sample docs to test."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" cursor = conn.cursor()\n",
"\n",
" drop_table_sql = \"\"\"drop table if exists demo_tab\"\"\"\n",
" cursor.execute(drop_table_sql)\n",
"\n",
" create_table_sql = \"\"\"create table demo_tab (id number, data clob)\"\"\"\n",
" cursor.execute(create_table_sql)\n",
"\n",
" insert_row_sql = \"\"\"insert into demo_tab values (:1, :2)\"\"\"\n",
" rows_to_insert = [\n",
" (\n",
" 1,\n",
" \"If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.\",\n",
" ),\n",
" (\n",
" 2,\n",
" \"A tablespace can be online (accessible) or offline (not accessible) whenever the database is open.\\nA tablespace is usually online so that its data is available to users. The SYSTEM tablespace and temporary tablespaces cannot be taken offline.\",\n",
" ),\n",
" (\n",
" 3,\n",
" \"The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table.\\nSometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.\",\n",
" ),\n",
" ]\n",
" cursor.executemany(insert_row_sql, rows_to_insert)\n",
"\n",
" conn.commit()\n",
"\n",
" print(\"Table created and populated.\")\n",
" cursor.close()\n",
"except Exception as e:\n",
" print(\"Table creation failed.\")\n",
" cursor.close()\n",
" conn.close()\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Documents\n",
"The users can load the documents from Oracle Database or a file system or both. They just need to set the loader parameters accordingly. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters.\n",
"\n",
"The main benefit of using OracleDocLoader is that it can handle 150+ different file formats. You don't need to use different types of loader for different file formats. Here is the list of the formats that we support: [Oracle Text Supported Document Formats](https://docs.oracle.com/en/database/oracle/oracle-database/23/ccref/oracle-text-supported-document-formats.html)\n",
"\n",
"The following sample code will show how to do that:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_core.documents import Document\n",
"\n",
"\"\"\"\n",
"# loading a local file\n",
"loader_params = {}\n",
"loader_params[\"file\"] = \"<file>\"\n",
"\n",
"# loading from a local directory\n",
"loader_params = {}\n",
"loader_params[\"dir\"] = \"<directory>\"\n",
"\"\"\"\n",
"\n",
"# loading from Oracle Database table\n",
"loader_params = {\n",
" \"owner\": \"<owner>\",\n",
" \"tablename\": \"demo_tab\",\n",
" \"colname\": \"data\",\n",
"}\n",
"\n",
"\"\"\" load the docs \"\"\"\n",
"loader = OracleDocLoader(conn=conn, params=loader_params)\n",
"docs = loader.load()\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of docs loaded: {len(docs)}\")\n",
"# print(f\"Document-0: {docs[0].page_content}\") # content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Split Documents\n",
"The documents can be in different sizes: small, medium, large, or very large. The users like to split/chunk their documents into smaller pieces to generate embeddings. There are lots of different splitting customizations the users can do. Please refer to the Oracle AI Vector Search Guide book for complete information about these parameters.\n",
"\n",
"The following sample code will show how to do that:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_core.documents import Document\n",
"\n",
"\"\"\"\n",
"# Some examples\n",
"# split by chars, max 500 chars\n",
"splitter_params = {\"split\": \"chars\", \"max\": 500, \"normalize\": \"all\"}\n",
"\n",
"# split by words, max 100 words\n",
"splitter_params = {\"split\": \"words\", \"max\": 100, \"normalize\": \"all\"}\n",
"\n",
"# split by sentence, max 20 sentences\n",
"splitter_params = {\"split\": \"sentence\", \"max\": 20, \"normalize\": \"all\"}\n",
"\"\"\"\n",
"\n",
"# split by default parameters\n",
"splitter_params = {\"normalize\": \"all\"}\n",
"\n",
"# get the splitter instance\n",
"splitter = OracleTextSplitter(conn=conn, params=splitter_params)\n",
"\n",
"list_chunks = []\n",
"for doc in docs:\n",
" chunks = splitter.split_text(doc.page_content)\n",
" list_chunks.extend(chunks)\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of Chunks: {len(list_chunks)}\")\n",
"# print(f\"Chunk-0: {list_chunks[0]}\") # content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### End to End Demo\n",
"Please refer to our complete demo guide [Oracle AI Vector Search End-to-End Demo Guide](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) to build an end to end RAG pipeline with the help of Oracle AI Vector Search.\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -6,17 +6,19 @@
"source": [
"# Pebblo Safe DocumentLoader\n",
"\n",
"> [Pebblo](https://github.com/daxa-ai/pebblo) enables developers to safely load data and promote their Gen AI app to deployment without worrying about the organizations compliance and security requirements. The project identifies semantic topics and entities found in the loaded data and summarizes them on the UI or a PDF report.\n",
"> [Pebblo](https://daxa-ai.github.io/pebblo/) enables developers to safely load data and promote their Gen AI app to deployment without worrying about the organizations compliance and security requirements. The project identifies semantic topics and entities found in the loaded data and summarizes them on the UI or a PDF report.\n",
"\n",
"Pebblo has two components.\n",
"\n",
"1. Pebblo Safe DocumentLoader for Langchain\n",
"1. Pebblo Daemon\n",
"1. Pebblo Server\n",
"\n",
"This document describes how to augment your existing Langchain DocumentLoader with Pebblo Safe DocumentLoader to get deep data visibility on the types of Topics and Entities ingested into the Gen-AI Langchain application. For details on `Pebblo Daemon` see this [pebblo daemon](https://daxa-ai.github.io/pebblo-docs/daemon.html) document.\n",
"This document describes how to augment your existing Langchain DocumentLoader with Pebblo Safe DocumentLoader to get deep data visibility on the types of Topics and Entities ingested into the Gen-AI Langchain application. For details on `Pebblo Server` see this [pebblo server](https://daxa-ai.github.io/pebblo/daemon) document.\n",
"\n",
"Pebblo Safeloader enables safe data ingestion for Langchain `DocumentLoader`. This is done by wrapping the document loader call with `Pebblo Safe DocumentLoader`.\n",
"\n",
"Note: To configure pebblo server on some url other that pebblo's default (localhost:8000) url, put the correct URL in `PEBBLO_CLASSIFIER_URL` env variable. This is configurable using the `classifier_url` keyword argument as well. Ref: [server-configurations](https://daxa-ai.github.io/pebblo/config)\n",
"\n",
"#### How to Pebblo enable Document Loading?\n",
"\n",
"Assume a Langchain RAG application snippet using `CSVLoader` to read a CSV document for inference.\n",
@@ -69,7 +71,7 @@
"source": [
"### Send semantic topics and identities to Pebblo cloud server\n",
"\n",
"To send semantic data to pebblo-cloud, pass api-key to PebbloSafeLoader as an argument or alternatively, put the api-ket in `PEBBLO_API_KEY` environment variable."
"To send semantic data to pebblo-cloud, pass api-key to PebbloSafeLoader as an argument or alternatively, put the api-key in `PEBBLO_API_KEY` environment variable."
]
},
{
@@ -91,6 +93,41 @@
"documents = loader.load()\n",
"print(documents)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add semantic topics and identities to loaded metadata\n",
"\n",
"To add semantic topics and sematic entities to metadata of loaded documents, set load_semantic to True as an argument or alternatively, define a new environment variable `PEBBLO_LOAD_SEMANTIC`, and setting it to True."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders.csv_loader import CSVLoader\n",
"from langchain_community.document_loaders import PebbloSafeLoader\n",
"\n",
"loader = PebbloSafeLoader(\n",
" CSVLoader(\"data/corp_sens_data.csv\"),\n",
" name=\"acme-corp-rag-1\", # App name (Mandatory)\n",
" owner=\"Joe Smith\", # Owner (Optional)\n",
" description=\"Support productivity RAG application\", # Description (Optional)\n",
" api_key=\"my-api-key\", # API key (Optional, can be set in the environment variable PEBBLO_API_KEY)\n",
" load_semantic=True, # Load semantic data (Optional, default is False, can be set in the environment variable PEBBLO_LOAD_SEMANTIC)\n",
")\n",
"documents = loader.load()\n",
"print(documents[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {

View File

@@ -0,0 +1,95 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Spider\n",
"[Spider](https://spider.cloud/) is the [fastest](https://github.com/spider-rs/spider/blob/main/benches/BENCHMARKS.md) and most affordable crawler and scraper that returns LLM-ready data.\n",
"\n",
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pip install spider-client"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage\n",
"To use spider you need to have an API key from [spider.cloud](https://spider.cloud/)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[Document(page_content='Spider - Fastest Web Crawler built for AI Agents and Large Language Models[Spider v1 Logo Spider ](/)The World\\'s Fastest and Cheapest Crawler API==========View Demo* Basic* StreamingExample requestPythonCopy```import requests, osheaders = { \\'Authorization\\': os.environ[\"SPIDER_API_KEY\"], \\'Content-Type\\': \\'application/json\\',}json_data = {\"limit\":50,\"url\":\"http://www.example.com\"}response = requests.post(\\'https://api.spider.cloud/crawl\\', headers=headers, json=json_data)print(response.json())```Example ResponseScrape with no headaches----------* Proxy rotations* Agent headers* Avoid anti-bot detections* Headless chrome* Markdown LLM ResponsesThe Fastest Web Crawler----------* Powered by [spider-rs](https://github.com/spider-rs/spider)* Do 20,000 pages in seconds* Full concurrency* Powerful and simple API* Cost effectiveScrape Anything with AI----------* Custom scripting browser* Custom data extraction* Data pipelines* Detailed insights* Advanced labeling[API](/docs/api) [Price](/credits/new) [Guides](/guides) [About](/about) [Docs](https://docs.rs/spider/latest/spider/) [Privacy](/privacy) [Terms](/eula)© 2024 Spider from A11yWatchTheme Light Dark Toggle Theme [GitHubGithub](https://github.com/spider-rs/spider)', metadata={'description': 'Collect data rapidly from any website. Seamlessly scrape websites and get data tailored for LLM workloads.', 'domain': 'spider.cloud', 'extracted_data': None, 'file_size': 33743, 'keywords': None, 'pathname': '/', 'resource_type': 'html', 'title': 'Spider - Fastest Web Crawler built for AI Agents and Large Language Models', 'url': '48f1bc3c-3fbb-408a-865b-c191a1bb1f48/spider.cloud/index.html', 'user_id': '48f1bc3c-3fbb-408a-865b-c191a1bb1f48'})]\n"
]
}
],
"source": [
"from langchain_community.document_loaders import SpiderLoader\n",
"\n",
"loader = SpiderLoader(\n",
" api_key=\"YOUR_API_KEY\",\n",
" url=\"https://spider.cloud\",\n",
" mode=\"scrape\", # if no API key is provided it looks for SPIDER_API_KEY in env\n",
")\n",
"\n",
"data = loader.load()\n",
"print(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Modes\n",
"- `scrape`: Default mode that scrapes a single URL\n",
"- `crawl`: Crawl all subpages of the domain url provided"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Crawler options\n",
"The `params` parameter is a dictionary that can be passed to the loader. See the [Spider documentation](https://spider.cloud/docs/api) to see all available parameters"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,126 @@
{
"cells": [
{
"cell_type": "raw",
"id": "910f5772b6af13c9",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"---\n",
"sidebar_label: Upstage\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "433f5422ad8e1efa",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"# UpstageLayoutAnalysisLoader\n",
"\n",
"This notebook covers how to get started with `UpstageLayoutAnalysisLoader`.\n",
"\n",
"## Installation\n",
"\n",
"Install `langchain-upstage` package.\n",
"\n",
"```bash\n",
"pip install -U langchain-upstage\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "e6e5941c",
"metadata": {},
"source": [
"## Environment Setup\n",
"\n",
"Make sure to set the following environment variables:\n",
"\n",
"- `UPSTAGE_API_KEY`: Your Upstage API key. Read [Upstage developers document](https://developers.upstage.ai/docs/getting-started/quick-start) to get your API key.\n",
"\n",
"> The previously used UPSTAGE_DOCUMENT_AI_API_KEY is deprecated. However, the key previously used in UPSTAGE_DOCUMENT_AI_API_KEY can now be used in UPSTAGE_API_KEY."
]
},
{
"cell_type": "markdown",
"id": "21e72f3d",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a05efd34",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"UPSTAGE_API_KEY\"] = \"YOUR_API_KEY\""
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "2b914a7b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective\\nDepth Up-Scaling Dahyun Kim* , Chanjun Park*1, Sanghoon Kim*+, Wonsung Lee*†, Wonho Song*\\nYunsu Kim* , Hyeonwoo Kim* , Yungi Kim, Hyeonju Lee, Jihoo Kim\\nChangbae Ahn, Seonghoon Yang, Sukyung Lee, Hyunbyung Park, Gyoungjin Gim\\nMikyoung Cha, Hwalsuk Leet , Sunghun Kim+ Upstage AI, South Korea {kdahyun, chan jun · park, limerobot, wonsung · lee, hwalsuk lee, hunkim} @ upstage · ai Abstract We introduce SOLAR 10.7B, a large language\\nmodel (LLM) with 10.7 billion parameters,\\ndemonstrating superior performance in various\\nnatural language processing (NLP) tasks. In-\\nspired by recent efforts to efficiently up-scale\\nLLMs, we present a method for scaling LLMs\\ncalled depth up-scaling (DUS), which encom-\\npasses depthwise scaling and continued pre-\\ntraining. In contrast to other LLM up-scaling\\nmethods that use mixture-of-experts, DUS does\\nnot require complex changes to train and infer-\\nence efficiently. We show experimentally that\\nDUS is simple yet effective in scaling up high-\\nperformance LLMs from small ones. Building\\non the DUS model, we additionally present SO-\\nLAR 10.7B-Instruct, a variant fine-tuned for\\ninstruction-following capabilities, surpassing\\nMixtral-8x7B-Instruct. SOLAR 10.7B is pub-\\nlicly available under the Apache 2.0 license,\\npromoting broad access and application in the\\nLLM field 1 1 Introduction The field of natural language processing (NLP)\\nhas been significantly transformed by the introduc-\\ntion of large language models (LLMs), which have\\nenhanced our understanding and interaction with\\nhuman language (Zhao et al., 2023). These ad-\\nvancements bring challenges such as the increased\\nneed to train ever larger models (Rae et al., 2021;\\nWang et al., 2023; Pan et al., 2023; Lian, 2023;\\nYao et al., 2023; Gesmundo and Maile, 2023) OW-\\ning to the performance scaling law (Kaplan et al.,\\n2020; Hernandez et al., 2021; Anil et al., 2023;\\nKaddour et al., 2023). To efficiently tackle the\\nabove, recent works in scaling language models\\nsuch as a mixture of experts (MoE) (Shazeer et al.,\\n2017; Komatsuzaki et al., 2022) have been pro-\\nposed. While those approaches are able to effi- ciently and effectively scale-up LLMs, they often\\nrequire non-trivial changes to the training and infer-\\nence framework (Gale et al., 2023), which hinders\\nwidespread applicability. Effectively and efficiently\\nscaling up LLMs whilst also retaining the simplic-\\nity for ease of use is an important problem (Alberts\\net al., 2023; Fraiwan and Khasawneh, 2023; Sallam\\net al., 2023; Bahrini et al., 2023). Inspired by Komatsuzaki et al. (2022), we\\npresent depth up-scaling (DUS), an effective and\\nefficient method to up-scale LLMs whilst also re-\\nmaining straightforward to use. DUS consists of\\nscaling the number of layers in the base model and\\ncontinually pretraining the scaled model. Unlike\\n(Komatsuzaki et al., 2022), DUS does not scale\\nthe model using MoE and rather use a depthwise\\nscaling method analogous to Tan and Le (2019)\\nwhich is adapted for the LLM architecture. Thus,\\nthere are no additional modules or dynamism as\\nwith MoE, making DUS immediately compatible\\nwith easy-to-use LLM frameworks such as Hug-\\ngingFace (Wolf et al., 2019) with no changes to\\nthe training or inference framework for maximal\\nefficiency. Furthermore, DUS is applicable to all\\ntransformer architectures, opening up new gate-\\nways to effectively and efficiently scale-up LLMs\\nin a simple manner. Using DUS, we release SO-\\nLAR 10.7B, an LLM with 10.7 billion parameters,\\nthat outperforms existing models like Llama 2 (Tou-\\nvron et al., 2023) and Mistral 7B (Jiang et al., 2023)\\nin various benchmarks. We have also developed SOLAR 10.7B-Instruct,\\na variant fine-tuned for tasks requiring strict adher-\\nence to complex instructions. It significantly out-\\nperforms the Mixtral-8x7B-Instruct model across\\nvarious evaluation metrics, evidencing an advanced\\nproficiency that exceeds the capabilities of even\\nlarger models in terms of benchmark performance. * Equal Contribution 1 Corresponding Author\\nhttps : / /huggingface.co/upstage/\\nSOLAR-1 0 · 7B-v1 . 0 By releasing SOLAR 10.7B under the Apache\\n2.0 license, we aim to promote collaboration and in-\\nnovation in NLP. This open-source approach allows 2024\\nApr\\n4\\n[cs.CL]\\narxiv:2...117.7.13' metadata={'page': 1, 'type': 'text', 'split': 'page'}\n",
"page_content=\"Step 1-1 Step 1-2\\nOutput Output Output\\nOutput Output Output\\n24 Layers 24Layers\\nMerge\\n8Layers\\n---- 48 Layers\\nCopy\\n8 Layers Continued\\n32Layers 32Layers\\nPretraining\\n24Layers\\n24 Layers Input\\nInput Input Input Input Input\\nStep 1. Depthwise Scaling Step2. Continued Pretraining Figure 1: Depth up-scaling for the case with n = 32, s = 48, and m = 8. Depth up-scaling is achieved through a\\ndual-stage process of depthwise scaling followed by continued pretraining. for wider access and application of these models\\nby researchers and developers globally. 2 Depth Up-Scaling To efficiently scale-up LLMs, we aim to utilize pre-\\ntrained weights of base models to scale up to larger\\nLLMs (Komatsuzaki et al., 2022). While exist-\\ning methods such as Komatsuzaki et al. (2022) use\\nMoE (Shazeer et al., 2017) to scale-up the model ar-\\nchitecture, we opt for a different depthwise scaling\\nstrategy inspired by Tan and Le (2019). We then\\ncontinually pretrain the scaled model as just scaling\\nthe model without further pretraining degrades the\\nperformance. Base model. Any n-layer transformer architec-\\nture can be used but we select the 32-layer Llama\\n2 architecture as our base model. We initialize the\\nLlama 2 architecture with pretrained weights from\\nMistral 7B, as it is one of the top performers com-\\npatible with the Llama 2 architecture. By adopting\\nthe Llama 2 architecture for our base model, we\\naim to leverage the vast pool of community re-\\nsources while introducing novel modifications to\\nfurther enhance its capabilities. Depthwise scaling. From the base model with n\\nlayers, we set the target layer count s for the scaled\\nmodel, which is largely dictated by the available\\nhardware. With the above, the depthwise scaling process\\nis as follows. The base model with n layers is\\nduplicated for subsequent modification. Then, we\\nremove the final m layers from the original model\\nand the initial m layers from its duplicate, thus\\nforming two distinct models with n - m layers.\\nThese two models are concatenated to form a scaled\\nmodel with s = 2· (n-m) layers. Note that n = 32\\nfrom our base model and we set s = 48 considering our hardware constraints and the efficiency of the\\nscaled model, i.e., fitting between 7 and 13 billion\\nparameters. Naturally, this leads to the removal of\\nm = 8 layers. The depthwise scaling process with\\nn = 32, s = 48, and m = 8 is depicted in 'Step 1:\\nDepthwise Scaling' of Fig. 1. We note that a method in the community that also\\n2 'Step 1:\\nscale the model in the same manner as\\nDepthwise Scaling' of Fig. 1 has been concurrently\\ndeveloped. Continued pretraining. The performance of the\\ndepthwise scaled model initially drops below that\\nof the base LLM. Thus, we additionally apply\\nthe continued pretraining step as shown in 'Step\\n2: Continued Pretraining' of Fig. 1. Experimen-\\ntally, we observe rapid performance recovery of\\nthe scaled model during continued pretraining, a\\nphenomenon also observed in Komatsuzaki et al.\\n(2022). We consider that the particular way of\\ndepthwise scaling has isolated the heterogeneity\\nin the scaled model which allowed for this fast\\nperformance recovery. Delving deeper into the heterogeneity of the\\nscaled model, a simpler alternative to depthwise\\nscaling could be to just repeat its layers once more,\\ni.e., from n to 2n layers. Then, the 'layer distance',\\nor the difference in the layer indices in the base\\nmodel, is only bigger than 1 where layers n and\\nn + 1 are connected, i.e., at the seam. However, this results in maximum layer distance\\nat the seam, which may be too significant of a\\ndiscrepancy for continued pretraining to quickly\\nresolve. Instead, depthwise scaling sacrifices the\\n2m middle layers, thereby reducing the discrep-\\nancy at the seam and making it easier for continued 2https : / /huggingface · co/Undi 95/\\nMistral-11B-v0 · 1\" metadata={'page': 2, 'type': 'text', 'split': 'page'}\n",
"page_content=\"Properties Instruction Training Datasets Alignment\\n Alpaca-GPT4 OpenOrca Synth. Math-Instruct Orca DPO Pairs Ultrafeedback Cleaned Synth. Math-Alignment\\n Total # Samples 52K 2.91M 126K 12.9K 60.8K 126K\\n Maximum # Samples Used 52K 100K 52K 12.9K 60.8K 20.1K\\n Open Source O O X O O Table 1: Training datasets used for the instruction and alignment tuning stages, respectively. For the instruction\\ntuning process, we utilized the Alpaca-GPT4 (Peng et al., 2023), OpenOrca (Mukherjee et al., 2023), and Synth.\\nMath-Instruct datasets, while for the alignment tuning, we employed the Orca DPO Pairs (Intel, 2023), Ultrafeedback\\nCleaned (Cui et al., 2023; Ivison et al., 2023), and Synth. Math-Alignment datasets. The 'Total # Samples indicates\\nthe total number of samples in the entire dataset. The 'Maximum # Samples Used' indicates the actual maximum\\nnumber of samples that were used in training, which could be lower than the total number of samples in a given\\ndataset. 'Open Source' indicates whether the dataset is open-sourced. pretraining to quickly recover performance. We\\nattribute the success of DUS to reducing such dis-\\ncrepancies in both the depthwise scaling and the\\ncontinued pretraining steps. We also hypothesize\\nthat other methods of depthwise scaling could also\\nwork for DUS, as long as the discrepancy in the\\nscaled model is sufficiently contained before the\\ncontinued pretraining step. Comparison to other up-scaling methods. Un-\\nlike Komatsuzaki et al. (2022), depthwise scaled\\nmodels do not require additional modules like gat-\\ning networks or dynamic expert selection. Conse-\\nquently, scaled models in DUS do not necessitate\\na distinct training framework for optimal training\\nefficiency, nor do they require specialized CUDA\\nkernels for fast inference. A DUS model can seam-\\nlessly integrate into existing training and inference\\nframeworks while maintaining high efficiency. 3 Training Details After DUS, including continued pretraining, we\\nperform fine-tuning of SOLAR 10.7B in two stages:\\n1) instruction tuning and 2) alignment tuning. Instruction tuning. In the instruction tuning\\nstage, the model is trained to follow instructions in\\na QA format (Zhang et al., 2023). We mostly use\\nopen-source datasets but also synthesize a math QA\\ndataset to enhance the model's mathematical capa-\\nbilities. A rundown of how we crafted the dataset is\\nas follows. First, seed math data are collected from\\nthe Math (Hendrycks et al., 2021) dataset only, to\\navoid contamination with commonly used bench-\\nmark datasets such as GSM8K (Cobbe et al., 2021).\\nThen, using a process similar to MetaMath (Yu\\net al., 2023), we rephrase the questions and an-\\nswers of the seed math data. We use the resulting\\nrephrased question-answer pairs as a QA dataset and call it 'Synth. Math-Instruct*. Alignment tuning. In the alignment tuning stage,\\nthe instruction-tuned model is further fine-tuned\\nto be more aligned with human or strong AI\\n(e.g., GPT4 (OpenAI, 2023)) preferences using\\nsDPO (Kim et al., 2024a), an improved version\\nof direct preference optimization (DPO) (Rafailov\\net al., 2023). Similar to the instruction tuning stage,\\nwe use mostly open-source datasets but also syn-\\nthesize a math-focused alignment dataset utilizing\\nthe 'Synth. Math-Instruct' dataset mentioned in the\\ninstruction tuning stage. The alignment data synthesis process is as\\nfollows. We take advantage of the fact that\\nthe rephrased question-answer pairs in Synth.\\nMath-Instruct data are beneficial in enhancing the\\nmodel's mathematical capabilities (see Sec. 4.3.1).\\nThus, we speculate that the rephrased answer to the\\nrephrased question is a better answer than the orig-\\ninal answer, possibly due to the interim rephrasing\\nstep. Consequently, we set the rephrased question\\nas the prompt and use the rephrased answer as the\\nchosen response and the original answer as the re-\\njected response and create the {prompt, chosen,\\nrejected} DPO tuple. We aggregate the tuples from\\nthe rephrased question-answer pairs and call the\\nresulting dataset 'Synth. Math-Alignment*. 4 Results 4.1 Experimental Details Training datasets. We present details regarding\\nour training datasets for the instruction and align-\\nment tuning stages in Tab. 1. We do not always\\nuse the entire dataset and instead subsample a set\\namount. Note that most of our training data is\\nopen-source, and the undisclosed datasets can be\\nsubstituted for open-source alternatives such as the\" metadata={'page': 3, 'type': 'text', 'split': 'page'}\n"
]
}
],
"source": [
"from langchain_upstage import UpstageLayoutAnalysisLoader\n",
"\n",
"file_path = \"/PATH/TO/YOUR/FILE.pdf\"\n",
"layzer = UpstageLayoutAnalysisLoader(file_path, split=\"page\")\n",
"\n",
"# For improved memory efficiency, consider using the lazy_load method to load documents page by page.\n",
"docs = layzer.load() # or layzer.lazy_load()\n",
"\n",
"for doc in docs[:3]:\n",
" print(doc)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -9,7 +9,7 @@
"\n",
"This covers how to use `WebBaseLoader` to load all text from `HTML` webpages into a document format that we can use downstream. For more custom logic for loading webpages look at some child class examples such as `IMSDbLoader`, `AZLyricsLoader`, and `CollegeConfidentialLoader`. \n",
"\n",
"If you don't want to worry about website crawling, bypassing JS-blocking sites, and data cleaning, consider using `FireCrawlLoader`.\n"
"If you don't want to worry about website crawling, bypassing JS-blocking sites, and data cleaning, consider using `FireCrawlLoader` or the faster option `SpiderLoader`.\n"
]
},
{

View File

@@ -16,7 +16,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "9ec8a3b3",
"metadata": {
"tags": []
@@ -28,14 +28,15 @@
},
{
"cell_type": "code",
"outputs": [],
"source": [
"loader = YuqueLoader(access_token=\"<your_personal_access_token>\")"
],
"execution_count": 2,
"id": "2ea958f0327ed6e8",
"metadata": {
"collapsed": false
},
"id": "2ea958f0327ed6e8"
"outputs": [],
"source": [
"loader = YuqueLoader(access_token=\"<your_personal_access_token>\")"
]
},
{
"cell_type": "code",
@@ -69,7 +70,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,793 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "mZaeRH_SjJWK"
},
"source": [
"# Google Cloud Vertex AI Reranker\n",
"\n",
"> The [Vertex Search Ranking API](https://cloud.google.com/generative-ai-app-builder/docs/ranking) is one of the standalone APIs in [Vertex AI Agent Builder](https://cloud.google.com/generative-ai-app-builder/docs/builder-apis). It takes a list of documents and reranks those documents based on how relevant the documents are to a query. Compared to embeddings, which look only at the semantic similarity of a document and a query, the ranking API can give you precise scores for how well a document answers a given query. The ranking API can be used to improve the quality of search results after retrieving an initial set of candidate documents.\n",
"\n",
">The ranking API is stateless so there's no need to index documents before calling the API. All you need to do is pass in the query and documents. This makes the API well suited for reranking documents from any document retrievers.\n",
"\n",
">For more information, see [Rank and rerank documents](https://cloud.google.com/generative-ai-app-builder/docs/ranking)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "w51yJNBAirPZ"
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-community langchain-google-community langchain-google-community[vertexaisearch] langchain-google-vertexai langchain-chroma langchain-text-splitters"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5sN2qvW0Wxwj"
},
"source": [
"### Setup"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"id": "axookyKSnl3G"
},
"outputs": [],
"source": [
"PROJECT_ID = \"\"\n",
"REGION = \"\"\n",
"RANKING_LOCATION_ID = \"global\" # @param {type:\"string\"}\n",
"\n",
"# Initialize GCP project for Vertex AI\n",
"from google.cloud import aiplatform\n",
"\n",
"aiplatform.init(project=PROJECT_ID, location=REGION)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7xie5peQW2Lf"
},
"source": [
"### Load and Prepare data\n",
"\n",
"For this example, we will be using the [Google Wiki page](https://en.wikipedia.org/wiki/Google)to demonstrate how the Vertex Ranking API works.\n",
"\n",
"We use a standard pipeline of `load -> split -> embed data`.\n",
"\n",
"The embeddings are created using the [Vertex Embeddings API](https://cloud.google.com/vertex-ai/generative-ai/docs/embeddings/get-text-embeddings#supported_models) model - `textembedding-gecko@003`"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "3yY5reMbkbFS",
"outputId": "e124299b-0fa2-4acd-aaec-d5361f008d97"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Your 1 documents have been split into 266 chunks\n"
]
}
],
"source": [
"from langchain_chroma import Chroma\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_google_vertexai import VertexAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"vectordb = None\n",
"\n",
"# Load wiki page\n",
"loader = WebBaseLoader(\"https://en.wikipedia.org/wiki/Google\")\n",
"data = loader.load()\n",
"\n",
"# Split doc into chunks\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=5)\n",
"splits = text_splitter.split_documents(data)\n",
"\n",
"print(f\"Your {len(data)} documents have been split into {len(splits)} chunks\")\n",
"\n",
"if vectordb is not None: # delete existing vectordb if it already exists\n",
" vectordb.delete_collection()\n",
"\n",
"embedding = VertexAIEmbeddings(model_name=\"textembedding-gecko@003\")\n",
"vectordb = Chroma.from_documents(documents=splits, embedding=embedding)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "jNmGwvrqnFF1"
},
"outputs": [],
"source": [
"import pandas as pd\n",
"from langchain.retrievers.contextual_compression import ContextualCompressionRetriever\n",
"from langchain_google_community.vertex_rank import VertexAIRank\n",
"\n",
"# Instantiate the VertexAIReranker with the SDK manager\n",
"reranker = VertexAIRank(\n",
" project_id=PROJECT_ID,\n",
" location_id=RANKING_LOCATION_ID,\n",
" ranking_config=\"default_ranking_config\",\n",
" title_field=\"source\",\n",
" top_n=5,\n",
")\n",
"\n",
"basic_retriever = vectordb.as_retriever(search_kwargs={\"k\": 5}) # fetch top 5 documents\n",
"\n",
"# Create the ContextualCompressionRetriever with the VertexAIRanker as a Reranker\n",
"retriever_with_reranker = ContextualCompressionRetriever(\n",
" base_compressor=reranker, base_retriever=basic_retriever\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uMOPl7ji_nU_"
},
"source": [
"### Testing out the Vertex Ranking API\n",
"\n",
"Let's query both the `basic_retriever` and `retriever_with_reranker` with the same query and compare the retrieved documents.\n",
"\n",
"The Ranking API takes in the input from the `basic_retriever` and passes it to the Ranking API.\n",
"\n",
"The ranking API is used to improve the quality of the ranking and determine a score that indicates the relevance of each record to the query.\n",
"\n",
"You can see the difference between the Unranked and the Ranked Documents. The Ranking API moves the most semantically relevant documents to the top of the context window of the LLM thus helping it form a better answer with reasoning."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 484
},
"id": "sJDkepoYoc0t",
"outputId": "eac41585-3d53-4dd9-da16-51ec47eedfec"
},
"outputs": [
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"summary": "{\n \"name\": \"comparison_df\",\n \"rows\": 5,\n \"fields\": [\n {\n \"column\": \"Unranked Documents\",\n \"properties\": {\n \"dtype\": \"string\",\n \"num_unique_values\": 5,\n \"samples\": [\n \"Eventually, they changed the name to Google; the name of the search engine was a misspelling of the word googol,[21][36][37] a very large number written 10100 (1 followed by 100 zeros), picked to signify that the search engine was intended to provide large quantities of information.[38]\",\n \"^ Swant, Marty. \\\"The World's Valuable Brands\\\". Forbes. Archived from the original on October 18, 2020. Retrieved January 19, 2022.\\n\\n^ \\\"Best Global Brands\\\". Interbrand. Archived from the original on February 1, 2022. Retrieved March 7, 2011.\\n\\n^ a b c d \\\"How we started and where we are today \\u2013 Google\\\". about.google. Archived from the original on April 22, 2020. Retrieved April 24, 2021.\\n\\n^ Brezina, Corona (2013). Sergey Brin, Larry Page, Eric Schmidt, and Google (1st\\u00a0ed.). New York: Rosen Publishing Group. p.\\u00a018. ISBN\\u00a0978-1-4488-6911-4. LCCN\\u00a02011039480.\\n\\n^ a b c \\\"Our history in depth\\\". Google Company. Archived from the original on April 1, 2012. Retrieved July 15, 2017.\",\n \"The name \\\"Google\\\" originated from a misspelling of \\\"googol\\\",[211][212] which refers to the number represented by a 1 followed by one-hundred zeros. Page and Brin write in their original paper on PageRank:[33] \\\"We chose our system name, Google, because it is a common spelling of googol, or 10100[,] and fits well with our goal of building very large-scale search engines.\\\" Having found its way increasingly into everyday language, the verb \\\"google\\\" was added to the Merriam Webster Collegiate Dictionary and the Oxford English Dictionary in 2006, meaning \\\"to use the Google search engine to obtain information on the Internet.\\\"[213][214] Google's mission statement, from the outset, was \\\"to organize the world's information and make it universally accessible and useful\\\",[215] and its unofficial\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"Ranked Documents\",\n \"properties\": {\n \"dtype\": \"string\",\n \"num_unique_values\": 5,\n \"samples\": [\n \"Eventually, they changed the name to Google; the name of the search engine was a misspelling of the word googol,[21][36][37] a very large number written 10100 (1 followed by 100 zeros), picked to signify that the search engine was intended to provide large quantities of information.[38]\",\n \"^ Swant, Marty. \\\"The World's Valuable Brands\\\". Forbes. Archived from the original on October 18, 2020. Retrieved January 19, 2022.\\n\\n^ \\\"Best Global Brands\\\". Interbrand. Archived from the original on February 1, 2022. Retrieved March 7, 2011.\\n\\n^ a b c d \\\"How we started and where we are today \\u2013 Google\\\". about.google. Archived from the original on April 22, 2020. Retrieved April 24, 2021.\\n\\n^ Brezina, Corona (2013). Sergey Brin, Larry Page, Eric Schmidt, and Google (1st\\u00a0ed.). New York: Rosen Publishing Group. p.\\u00a018. ISBN\\u00a0978-1-4488-6911-4. LCCN\\u00a02011039480.\\n\\n^ a b c \\\"Our history in depth\\\". Google Company. Archived from the original on April 1, 2012. Retrieved July 15, 2017.\",\n \"^ Meijer, Bart (January 3, 2019). \\\"Google shifted $23 billion to tax haven Bermuda in 2017: filing\\\". Reuters. Archived from the original on January 3, 2019. Retrieved January 3, 2019. Google moved 19.9 billion euros ($22.7 billion) through a Dutch shell company to Bermuda in 2017, as part of an arrangement that allows it to reduce its foreign tax bill\\n\\n^ Hamburger, Tom; Gold, Matea (April 13, 2014). \\\"Google, once disdainful of lobbying, now a master of Washington influence\\\". The Washington Post. Archived from the original on October 27, 2017. Retrieved August 22, 2017.\\n\\n^ Koller, David (January 2004). \\\"Origin of the name, \\\"Google.\\\"\\\". Stanford University. Archived from the original on June 27, 2012. Retrieved May 28, 2006.\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n }\n ]\n}",
"type": "dataframe",
"variable_name": "comparison_df"
},
"text/html": [
"\n",
" <div id=\"df-43c4f5f2-c31d-4664-85dd-60cad39bd5fa\" class=\"colab-df-container\">\n",
" <div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>Unranked Documents</th>\n",
" <th>Ranked Documents</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>^ a b Brin, Sergey; Page, Lawrence (1998). \"The anatomy of a large-scale hypertextual Web search engine\" (PDF). Computer Networks and ISDN Systems. 30 (17): 107117. CiteSeerX 10.1.1.115.5930. doi:10.1016/S0169-7552(98)00110-X. ISSN 0169-7552. S2CID 7587743. Archived (PDF) from the original on September 27, 2015. Retrieved April 7, 2019.\\n\\n^ \"About: RankDex\". Archived from the original on January 20, 2012. Retrieved September 29, 2010., RankDex\\n\\n^ \"Method for node ranking in a linked database\". Google Patents. Archived from the original on October 15, 2015. Retrieved October 19, 2015.\\n\\n^ Koller, David (January 2004). \"Origin of the name \"Google\"\". Stanford University. Archived from the original on June 27, 2012.</td>\n",
" <td>The name \"Google\" originated from a misspelling of \"googol\",[211][212] which refers to the number represented by a 1 followed by one-hundred zeros. Page and Brin write in their original paper on PageRank:[33] \"We chose our system name, Google, because it is a common spelling of googol, or 10100[,] and fits well with our goal of building very large-scale search engines.\" Having found its way increasingly into everyday language, the verb \"google\" was added to the Merriam Webster Collegiate Dictionary and the Oxford English Dictionary in 2006, meaning \"to use the Google search engine to obtain information on the Internet.\"[213][214] Google's mission statement, from the outset, was \"to organize the world's information and make it universally accessible and useful\",[215] and its unofficial</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>Eventually, they changed the name to Google; the name of the search engine was a misspelling of the word googol,[21][36][37] a very large number written 10100 (1 followed by 100 zeros), picked to signify that the search engine was intended to provide large quantities of information.[38]</td>\n",
" <td>Eventually, they changed the name to Google; the name of the search engine was a misspelling of the word googol,[21][36][37] a very large number written 10100 (1 followed by 100 zeros), picked to signify that the search engine was intended to provide large quantities of information.[38]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>The name \"Google\" originated from a misspelling of \"googol\",[211][212] which refers to the number represented by a 1 followed by one-hundred zeros. Page and Brin write in their original paper on PageRank:[33] \"We chose our system name, Google, because it is a common spelling of googol, or 10100[,] and fits well with our goal of building very large-scale search engines.\" Having found its way increasingly into everyday language, the verb \"google\" was added to the Merriam Webster Collegiate Dictionary and the Oxford English Dictionary in 2006, meaning \"to use the Google search engine to obtain information on the Internet.\"[213][214] Google's mission statement, from the outset, was \"to organize the world's information and make it universally accessible and useful\",[215] and its unofficial</td>\n",
" <td>^ Meijer, Bart (January 3, 2019). \"Google shifted $23 billion to tax haven Bermuda in 2017: filing\". Reuters. Archived from the original on January 3, 2019. Retrieved January 3, 2019. Google moved 19.9 billion euros ($22.7 billion) through a Dutch shell company to Bermuda in 2017, as part of an arrangement that allows it to reduce its foreign tax bill\\n\\n^ Hamburger, Tom; Gold, Matea (April 13, 2014). \"Google, once disdainful of lobbying, now a master of Washington influence\". The Washington Post. Archived from the original on October 27, 2017. Retrieved August 22, 2017.\\n\\n^ Koller, David (January 2004). \"Origin of the name, \"Google.\"\". Stanford University. Archived from the original on June 27, 2012. Retrieved May 28, 2006.</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>^ Meijer, Bart (January 3, 2019). \"Google shifted $23 billion to tax haven Bermuda in 2017: filing\". Reuters. Archived from the original on January 3, 2019. Retrieved January 3, 2019. Google moved 19.9 billion euros ($22.7 billion) through a Dutch shell company to Bermuda in 2017, as part of an arrangement that allows it to reduce its foreign tax bill\\n\\n^ Hamburger, Tom; Gold, Matea (April 13, 2014). \"Google, once disdainful of lobbying, now a master of Washington influence\". The Washington Post. Archived from the original on October 27, 2017. Retrieved August 22, 2017.\\n\\n^ Koller, David (January 2004). \"Origin of the name, \"Google.\"\". Stanford University. Archived from the original on June 27, 2012. Retrieved May 28, 2006.</td>\n",
" <td>^ a b Brin, Sergey; Page, Lawrence (1998). \"The anatomy of a large-scale hypertextual Web search engine\" (PDF). Computer Networks and ISDN Systems. 30 (17): 107117. CiteSeerX 10.1.1.115.5930. doi:10.1016/S0169-7552(98)00110-X. ISSN 0169-7552. S2CID 7587743. Archived (PDF) from the original on September 27, 2015. Retrieved April 7, 2019.\\n\\n^ \"About: RankDex\". Archived from the original on January 20, 2012. Retrieved September 29, 2010., RankDex\\n\\n^ \"Method for node ranking in a linked database\". Google Patents. Archived from the original on October 15, 2015. Retrieved October 19, 2015.\\n\\n^ Koller, David (January 2004). \"Origin of the name \"Google\"\". Stanford University. Archived from the original on June 27, 2012.</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>^ Swant, Marty. \"The World's Valuable Brands\". Forbes. Archived from the original on October 18, 2020. Retrieved January 19, 2022.\\n\\n^ \"Best Global Brands\". Interbrand. Archived from the original on February 1, 2022. Retrieved March 7, 2011.\\n\\n^ a b c d \"How we started and where we are today Google\". about.google. Archived from the original on April 22, 2020. Retrieved April 24, 2021.\\n\\n^ Brezina, Corona (2013). Sergey Brin, Larry Page, Eric Schmidt, and Google (1st ed.). New York: Rosen Publishing Group. p. 18. ISBN 978-1-4488-6911-4. LCCN 2011039480.\\n\\n^ a b c \"Our history in depth\". Google Company. Archived from the original on April 1, 2012. Retrieved July 15, 2017.</td>\n",
" <td>^ Swant, Marty. \"The World's Valuable Brands\". Forbes. Archived from the original on October 18, 2020. Retrieved January 19, 2022.\\n\\n^ \"Best Global Brands\". Interbrand. Archived from the original on February 1, 2022. Retrieved March 7, 2011.\\n\\n^ a b c d \"How we started and where we are today Google\". about.google. Archived from the original on April 22, 2020. Retrieved April 24, 2021.\\n\\n^ Brezina, Corona (2013). Sergey Brin, Larry Page, Eric Schmidt, and Google (1st ed.). New York: Rosen Publishing Group. p. 18. ISBN 978-1-4488-6911-4. LCCN 2011039480.\\n\\n^ a b c \"Our history in depth\". Google Company. Archived from the original on April 1, 2012. Retrieved July 15, 2017.</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>\n",
" <div class=\"colab-df-buttons\">\n",
"\n",
" <div class=\"colab-df-container\">\n",
" <button class=\"colab-df-convert\" onclick=\"convertToInteractive('df-43c4f5f2-c31d-4664-85dd-60cad39bd5fa')\"\n",
" title=\"Convert this dataframe to an interactive table.\"\n",
" style=\"display:none;\">\n",
"\n",
" <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\" viewBox=\"0 -960 960 960\">\n",
" <path d=\"M120-120v-720h720v720H120Zm60-500h600v-160H180v160Zm220 220h160v-160H400v160Zm0 220h160v-160H400v160ZM180-400h160v-160H180v160Zm440 0h160v-160H620v160ZM180-180h160v-160H180v160Zm440 0h160v-160H620v160Z\"/>\n",
" </svg>\n",
" </button>\n",
"\n",
" <style>\n",
" .colab-df-container {\n",
" display:flex;\n",
" gap: 12px;\n",
" }\n",
"\n",
" .colab-df-convert {\n",
" background-color: #E8F0FE;\n",
" border: none;\n",
" border-radius: 50%;\n",
" cursor: pointer;\n",
" display: none;\n",
" fill: #1967D2;\n",
" height: 32px;\n",
" padding: 0 0 0 0;\n",
" width: 32px;\n",
" }\n",
"\n",
" .colab-df-convert:hover {\n",
" background-color: #E2EBFA;\n",
" box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
" fill: #174EA6;\n",
" }\n",
"\n",
" .colab-df-buttons div {\n",
" margin-bottom: 4px;\n",
" }\n",
"\n",
" [theme=dark] .colab-df-convert {\n",
" background-color: #3B4455;\n",
" fill: #D2E3FC;\n",
" }\n",
"\n",
" [theme=dark] .colab-df-convert:hover {\n",
" background-color: #434B5C;\n",
" box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
" filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
" fill: #FFFFFF;\n",
" }\n",
" </style>\n",
"\n",
" <script>\n",
" const buttonEl =\n",
" document.querySelector('#df-43c4f5f2-c31d-4664-85dd-60cad39bd5fa button.colab-df-convert');\n",
" buttonEl.style.display =\n",
" google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
"\n",
" async function convertToInteractive(key) {\n",
" const element = document.querySelector('#df-43c4f5f2-c31d-4664-85dd-60cad39bd5fa');\n",
" const dataTable =\n",
" await google.colab.kernel.invokeFunction('convertToInteractive',\n",
" [key], {});\n",
" if (!dataTable) return;\n",
"\n",
" const docLinkHtml = 'Like what you see? Visit the ' +\n",
" '<a target=\"_blank\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\n",
" + ' to learn more about interactive tables.';\n",
" element.innerHTML = '';\n",
" dataTable['output_type'] = 'display_data';\n",
" await google.colab.output.renderOutput(dataTable, element);\n",
" const docLink = document.createElement('div');\n",
" docLink.innerHTML = docLinkHtml;\n",
" element.appendChild(docLink);\n",
" }\n",
" </script>\n",
" </div>\n",
"\n",
"\n",
"<div id=\"df-fff80078-f146-44f5-9eff-d91c9305c276\">\n",
" <button class=\"colab-df-quickchart\" onclick=\"quickchart('df-fff80078-f146-44f5-9eff-d91c9305c276')\"\n",
" title=\"Suggest charts\"\n",
" style=\"display:none;\">\n",
"\n",
"<svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\"viewBox=\"0 0 24 24\"\n",
" width=\"24px\">\n",
" <g>\n",
" <path d=\"M19 3H5c-1.1 0-2 .9-2 2v14c0 1.1.9 2 2 2h14c1.1 0 2-.9 2-2V5c0-1.1-.9-2-2-2zM9 17H7v-7h2v7zm4 0h-2V7h2v10zm4 0h-2v-4h2v4z\"/>\n",
" </g>\n",
"</svg>\n",
" </button>\n",
"\n",
"<style>\n",
" .colab-df-quickchart {\n",
" --bg-color: #E8F0FE;\n",
" --fill-color: #1967D2;\n",
" --hover-bg-color: #E2EBFA;\n",
" --hover-fill-color: #174EA6;\n",
" --disabled-fill-color: #AAA;\n",
" --disabled-bg-color: #DDD;\n",
" }\n",
"\n",
" [theme=dark] .colab-df-quickchart {\n",
" --bg-color: #3B4455;\n",
" --fill-color: #D2E3FC;\n",
" --hover-bg-color: #434B5C;\n",
" --hover-fill-color: #FFFFFF;\n",
" --disabled-bg-color: #3B4455;\n",
" --disabled-fill-color: #666;\n",
" }\n",
"\n",
" .colab-df-quickchart {\n",
" background-color: var(--bg-color);\n",
" border: none;\n",
" border-radius: 50%;\n",
" cursor: pointer;\n",
" display: none;\n",
" fill: var(--fill-color);\n",
" height: 32px;\n",
" padding: 0;\n",
" width: 32px;\n",
" }\n",
"\n",
" .colab-df-quickchart:hover {\n",
" background-color: var(--hover-bg-color);\n",
" box-shadow: 0 1px 2px rgba(60, 64, 67, 0.3), 0 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
" fill: var(--button-hover-fill-color);\n",
" }\n",
"\n",
" .colab-df-quickchart-complete:disabled,\n",
" .colab-df-quickchart-complete:disabled:hover {\n",
" background-color: var(--disabled-bg-color);\n",
" fill: var(--disabled-fill-color);\n",
" box-shadow: none;\n",
" }\n",
"\n",
" .colab-df-spinner {\n",
" border: 2px solid var(--fill-color);\n",
" border-color: transparent;\n",
" border-bottom-color: var(--fill-color);\n",
" animation:\n",
" spin 1s steps(1) infinite;\n",
" }\n",
"\n",
" @keyframes spin {\n",
" 0% {\n",
" border-color: transparent;\n",
" border-bottom-color: var(--fill-color);\n",
" border-left-color: var(--fill-color);\n",
" }\n",
" 20% {\n",
" border-color: transparent;\n",
" border-left-color: var(--fill-color);\n",
" border-top-color: var(--fill-color);\n",
" }\n",
" 30% {\n",
" border-color: transparent;\n",
" border-left-color: var(--fill-color);\n",
" border-top-color: var(--fill-color);\n",
" border-right-color: var(--fill-color);\n",
" }\n",
" 40% {\n",
" border-color: transparent;\n",
" border-right-color: var(--fill-color);\n",
" border-top-color: var(--fill-color);\n",
" }\n",
" 60% {\n",
" border-color: transparent;\n",
" border-right-color: var(--fill-color);\n",
" }\n",
" 80% {\n",
" border-color: transparent;\n",
" border-right-color: var(--fill-color);\n",
" border-bottom-color: var(--fill-color);\n",
" }\n",
" 90% {\n",
" border-color: transparent;\n",
" border-bottom-color: var(--fill-color);\n",
" }\n",
" }\n",
"</style>\n",
"\n",
" <script>\n",
" async function quickchart(key) {\n",
" const quickchartButtonEl =\n",
" document.querySelector('#' + key + ' button');\n",
" quickchartButtonEl.disabled = true; // To prevent multiple clicks.\n",
" quickchartButtonEl.classList.add('colab-df-spinner');\n",
" try {\n",
" const charts = await google.colab.kernel.invokeFunction(\n",
" 'suggestCharts', [key], {});\n",
" } catch (error) {\n",
" console.error('Error during call to suggestCharts:', error);\n",
" }\n",
" quickchartButtonEl.classList.remove('colab-df-spinner');\n",
" quickchartButtonEl.classList.add('colab-df-quickchart-complete');\n",
" }\n",
" (() => {\n",
" let quickchartButtonEl =\n",
" document.querySelector('#df-fff80078-f146-44f5-9eff-d91c9305c276 button');\n",
" quickchartButtonEl.style.display =\n",
" google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
" })();\n",
" </script>\n",
"</div>\n",
"\n",
" <div id=\"id_7648ee4a-f747-429c-820f-e03d3c59f765\">\n",
" <style>\n",
" .colab-df-generate {\n",
" background-color: #E8F0FE;\n",
" border: none;\n",
" border-radius: 50%;\n",
" cursor: pointer;\n",
" display: none;\n",
" fill: #1967D2;\n",
" height: 32px;\n",
" padding: 0 0 0 0;\n",
" width: 32px;\n",
" }\n",
"\n",
" .colab-df-generate:hover {\n",
" background-color: #E2EBFA;\n",
" box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
" fill: #174EA6;\n",
" }\n",
"\n",
" [theme=dark] .colab-df-generate {\n",
" background-color: #3B4455;\n",
" fill: #D2E3FC;\n",
" }\n",
"\n",
" [theme=dark] .colab-df-generate:hover {\n",
" background-color: #434B5C;\n",
" box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
" filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
" fill: #FFFFFF;\n",
" }\n",
" </style>\n",
" <button class=\"colab-df-generate\" onclick=\"generateWithVariable('comparison_df')\"\n",
" title=\"Generate code using this dataframe.\"\n",
" style=\"display:none;\">\n",
"\n",
" <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\"viewBox=\"0 0 24 24\"\n",
" width=\"24px\">\n",
" <path d=\"M7,19H8.4L18.45,9,17,7.55,7,17.6ZM5,21V16.75L18.45,3.32a2,2,0,0,1,2.83,0l1.4,1.43a1.91,1.91,0,0,1,.58,1.4,1.91,1.91,0,0,1-.58,1.4L9.25,21ZM18.45,9,17,7.55Zm-12,3A5.31,5.31,0,0,0,4.9,8.1,5.31,5.31,0,0,0,1,6.5,5.31,5.31,0,0,0,4.9,4.9,5.31,5.31,0,0,0,6.5,1,5.31,5.31,0,0,0,8.1,4.9,5.31,5.31,0,0,0,12,6.5,5.46,5.46,0,0,0,6.5,12Z\"/>\n",
" </svg>\n",
" </button>\n",
" <script>\n",
" (() => {\n",
" const buttonEl =\n",
" document.querySelector('#id_7648ee4a-f747-429c-820f-e03d3c59f765 button.colab-df-generate');\n",
" buttonEl.style.display =\n",
" google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
"\n",
" buttonEl.onclick = () => {\n",
" google.colab.notebook.generateWithVariable('comparison_df');\n",
" }\n",
" })();\n",
" </script>\n",
" </div>\n",
"\n",
" </div>\n",
" </div>\n"
],
"text/plain": [
" Unranked Documents \\\n",
"0 ^ a b Brin, Sergey; Page, Lawrence (1998). \"The anatomy of a large-scale hypertextual Web search engine\" (PDF). Computer Networks and ISDN Systems. 30 (17): 107117. CiteSeerX 10.1.1.115.5930. doi:10.1016/S0169-7552(98)00110-X. ISSN 0169-7552. S2CID 7587743. Archived (PDF) from the original on September 27, 2015. Retrieved April 7, 2019.\\n\\n^ \"About: RankDex\". Archived from the original on January 20, 2012. Retrieved September 29, 2010., RankDex\\n\\n^ \"Method for node ranking in a linked database\". Google Patents. Archived from the original on October 15, 2015. Retrieved October 19, 2015.\\n\\n^ Koller, David (January 2004). \"Origin of the name \"Google\"\". Stanford University. Archived from the original on June 27, 2012. \n",
"1 Eventually, they changed the name to Google; the name of the search engine was a misspelling of the word googol,[21][36][37] a very large number written 10100 (1 followed by 100 zeros), picked to signify that the search engine was intended to provide large quantities of information.[38] \n",
"2 The name \"Google\" originated from a misspelling of \"googol\",[211][212] which refers to the number represented by a 1 followed by one-hundred zeros. Page and Brin write in their original paper on PageRank:[33] \"We chose our system name, Google, because it is a common spelling of googol, or 10100[,] and fits well with our goal of building very large-scale search engines.\" Having found its way increasingly into everyday language, the verb \"google\" was added to the Merriam Webster Collegiate Dictionary and the Oxford English Dictionary in 2006, meaning \"to use the Google search engine to obtain information on the Internet.\"[213][214] Google's mission statement, from the outset, was \"to organize the world's information and make it universally accessible and useful\",[215] and its unofficial \n",
"3 ^ Meijer, Bart (January 3, 2019). \"Google shifted $23 billion to tax haven Bermuda in 2017: filing\". Reuters. Archived from the original on January 3, 2019. Retrieved January 3, 2019. Google moved 19.9 billion euros ($22.7 billion) through a Dutch shell company to Bermuda in 2017, as part of an arrangement that allows it to reduce its foreign tax bill\\n\\n^ Hamburger, Tom; Gold, Matea (April 13, 2014). \"Google, once disdainful of lobbying, now a master of Washington influence\". The Washington Post. Archived from the original on October 27, 2017. Retrieved August 22, 2017.\\n\\n^ Koller, David (January 2004). \"Origin of the name, \"Google.\"\". Stanford University. Archived from the original on June 27, 2012. Retrieved May 28, 2006. \n",
"4 ^ Swant, Marty. \"The World's Valuable Brands\". Forbes. Archived from the original on October 18, 2020. Retrieved January 19, 2022.\\n\\n^ \"Best Global Brands\". Interbrand. Archived from the original on February 1, 2022. Retrieved March 7, 2011.\\n\\n^ a b c d \"How we started and where we are today Google\". about.google. Archived from the original on April 22, 2020. Retrieved April 24, 2021.\\n\\n^ Brezina, Corona (2013). Sergey Brin, Larry Page, Eric Schmidt, and Google (1st ed.). New York: Rosen Publishing Group. p. 18. ISBN 978-1-4488-6911-4. LCCN 2011039480.\\n\\n^ a b c \"Our history in depth\". Google Company. Archived from the original on April 1, 2012. Retrieved July 15, 2017. \n",
"\n",
" Ranked Documents \n",
"0 The name \"Google\" originated from a misspelling of \"googol\",[211][212] which refers to the number represented by a 1 followed by one-hundred zeros. Page and Brin write in their original paper on PageRank:[33] \"We chose our system name, Google, because it is a common spelling of googol, or 10100[,] and fits well with our goal of building very large-scale search engines.\" Having found its way increasingly into everyday language, the verb \"google\" was added to the Merriam Webster Collegiate Dictionary and the Oxford English Dictionary in 2006, meaning \"to use the Google search engine to obtain information on the Internet.\"[213][214] Google's mission statement, from the outset, was \"to organize the world's information and make it universally accessible and useful\",[215] and its unofficial \n",
"1 Eventually, they changed the name to Google; the name of the search engine was a misspelling of the word googol,[21][36][37] a very large number written 10100 (1 followed by 100 zeros), picked to signify that the search engine was intended to provide large quantities of information.[38] \n",
"2 ^ Meijer, Bart (January 3, 2019). \"Google shifted $23 billion to tax haven Bermuda in 2017: filing\". Reuters. Archived from the original on January 3, 2019. Retrieved January 3, 2019. Google moved 19.9 billion euros ($22.7 billion) through a Dutch shell company to Bermuda in 2017, as part of an arrangement that allows it to reduce its foreign tax bill\\n\\n^ Hamburger, Tom; Gold, Matea (April 13, 2014). \"Google, once disdainful of lobbying, now a master of Washington influence\". The Washington Post. Archived from the original on October 27, 2017. Retrieved August 22, 2017.\\n\\n^ Koller, David (January 2004). \"Origin of the name, \"Google.\"\". Stanford University. Archived from the original on June 27, 2012. Retrieved May 28, 2006. \n",
"3 ^ a b Brin, Sergey; Page, Lawrence (1998). \"The anatomy of a large-scale hypertextual Web search engine\" (PDF). Computer Networks and ISDN Systems. 30 (17): 107117. CiteSeerX 10.1.1.115.5930. doi:10.1016/S0169-7552(98)00110-X. ISSN 0169-7552. S2CID 7587743. Archived (PDF) from the original on September 27, 2015. Retrieved April 7, 2019.\\n\\n^ \"About: RankDex\". Archived from the original on January 20, 2012. Retrieved September 29, 2010., RankDex\\n\\n^ \"Method for node ranking in a linked database\". Google Patents. Archived from the original on October 15, 2015. Retrieved October 19, 2015.\\n\\n^ Koller, David (January 2004). \"Origin of the name \"Google\"\". Stanford University. Archived from the original on June 27, 2012. \n",
"4 ^ Swant, Marty. \"The World's Valuable Brands\". Forbes. Archived from the original on October 18, 2020. Retrieved January 19, 2022.\\n\\n^ \"Best Global Brands\". Interbrand. Archived from the original on February 1, 2022. Retrieved March 7, 2011.\\n\\n^ a b c d \"How we started and where we are today Google\". about.google. Archived from the original on April 22, 2020. Retrieved April 24, 2021.\\n\\n^ Brezina, Corona (2013). Sergey Brin, Larry Page, Eric Schmidt, and Google (1st ed.). New York: Rosen Publishing Group. p. 18. ISBN 978-1-4488-6911-4. LCCN 2011039480.\\n\\n^ a b c \"Our history in depth\". Google Company. Archived from the original on April 1, 2012. Retrieved July 15, 2017. "
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"\n",
"# Use the basic_retriever and the retriever_with_reranker to get relevant documents\n",
"query = \"how did the name google originate?\"\n",
"retrieved_docs = basic_retriever.invoke(query)\n",
"reranked_docs = retriever_with_reranker.invoke(query)\n",
"\n",
"# Create two lists of results for unranked and ranked docs\n",
"unranked_docs_content = [docs.page_content for docs in retrieved_docs]\n",
"ranked_docs_content = [docs.page_content for docs in reranked_docs]\n",
"\n",
"# Create a comparison DataFrame using the padded lists\n",
"comparison_df = pd.DataFrame(\n",
" {\n",
" \"Unranked Documents\": unranked_docs_content,\n",
" \"Ranked Documents\": ranked_docs_content,\n",
" }\n",
")\n",
"\n",
"comparison_df"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ud_cnGszb1i9"
},
"source": [
"Let's inspect a couple of reranked documents. We observe that the retriever still returns the relevant Langchain type [documents](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) but as part of the metadata field, we also recieve the `relevance_score` from the Ranking API."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 225
},
"id": "FCDvNjPuAYVv",
"outputId": "23454993-0251-457b-8733-bd413e1b1043"
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 0\n",
"page_content='The name \"Google\" originated from a misspelling of \"googol\",[211][212] which refers to the number represented by a 1 followed by one-hundred zeros. Page and Brin write in their original paper on PageRank:[33] \"We chose our system name, Google, because it is a common spelling of googol, or 10100[,] and fits well with our goal of building very large-scale search engines.\" Having found its way increasingly into everyday language, the verb \"google\" was added to the Merriam Webster Collegiate Dictionary and the Oxford English Dictionary in 2006, meaning \"to use the Google search engine to obtain information on the Internet.\"[213][214] Google\\'s mission statement, from the outset, was \"to organize the world\\'s information and make it universally accessible and useful\",[215] and its unofficial' metadata={'id': '2', 'relevance_score': 0.9800000190734863, 'source': 'https://en.wikipedia.org/wiki/Google'}\n",
"----------------------------------------------------------\n",
"\n",
"Document 1\n",
"page_content='Eventually, they changed the name to Google; the name of the search engine was a misspelling of the word googol,[21][36][37] a very large number written 10100 (1 followed by 100 zeros), picked to signify that the search engine was intended to provide large quantities of information.[38]' metadata={'id': '1', 'relevance_score': 0.75, 'source': 'https://en.wikipedia.org/wiki/Google'}\n",
"----------------------------------------------------------\n",
"\n"
]
}
],
"source": [
"for i in range(2):\n",
" print(f\"Document {i}\")\n",
" print(reranked_docs[i])\n",
" print(\"----------------------------------------------------------\\n\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "hELRT4bMeqcs"
},
"source": [
"### Putting it all together\n",
"\n",
"This shows an example of a complete RAG chain with a simple prompt template on how you can perform reranking using the Vertex Ranking API.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 17
},
"id": "u1cfbdZyTgeq",
"outputId": "3395ca20-5327-4143-e769-ddefb7e1bed0"
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.docstore.document import Document\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"from langchain_google_vertexai import VertexAI\n",
"\n",
"llm = VertexAI(model_name=\"gemini-1.0-pro-002\")\n",
"\n",
"# Instantiate the VertexAIReranker with the SDK manager\n",
"reranker = VertexAIRank(\n",
" project_id=PROJECT_ID,\n",
" location_id=RANKING_LOCATION_ID,\n",
" ranking_config=\"default_ranking_config\",\n",
" title_field=\"source\", # metadata field key from your existing documents\n",
" top_n=5,\n",
")\n",
"\n",
"# value of k can be set to a higher value as well for tweaking performance\n",
"# eg: # of docs: basic_retriever(100) -> reranker(5)\n",
"basic_retriever = vectordb.as_retriever(search_kwargs={\"k\": 5}) # fetch top 5 documents\n",
"\n",
"# Create the ContextualCompressionRetriever with the VertexAIRanker as a Reranker\n",
"retriever_with_reranker = ContextualCompressionRetriever(\n",
" base_compressor=reranker, base_retriever=basic_retriever\n",
")\n",
"\n",
"template = \"\"\"\n",
"<context>\n",
"{context}\n",
"</context>\n",
"\n",
"Question:\n",
"{query}\n",
"\n",
"Don't give information outside the context or repeat your findings.\n",
"Answer:\n",
"\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"\n",
"reranker_setup_and_retrieval = RunnableParallel(\n",
" {\"context\": retriever_with_reranker, \"query\": RunnablePassthrough()}\n",
")\n",
"\n",
"chain = reranker_setup_and_retrieval | prompt | llm"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 17
},
"id": "dv68uTmvT7SJ",
"outputId": "254ebc12-fbb3-4321-9864-604383f071fe"
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"query = \"how did the name google originate?\""
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 53
},
"id": "taZAoM_bU2_f",
"outputId": "3a0e1c44-8760-479c-d4a9-030929cb442b"
},
"outputs": [
{
"data": {
"text/html": [
"\n",
" <style>\n",
" pre {\n",
" white-space: pre-wrap;\n",
" }\n",
" </style>\n",
" "
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'The name \"Google\" originated as a misspelling of the word \"googol,\" a mathematical term for the number 1 followed by 100 zeros. Larry Page and Sergey Brin, the founders of Google, chose the name because it reflected their goal of building a search engine that could handle massive amounts of information. \\n'"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(query)"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -39,8 +39,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet google-cloud-documentai\n",
"%pip install --upgrade --quiet google-cloud-documentai-toolbox"
"%pip install --upgrade --quiet langchain-google-community[docai]"
]
},
{
@@ -71,8 +70,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.blob_loaders import Blob\n",
"from langchain_community.document_loaders.parsers import DocAIParser"
"from langchain_core.document_loaders.blob_loaders import Blob\n",
"from langchain_google_community import DocAIParser"
]
},
{

View File

@@ -31,8 +31,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_transformers import GoogleTranslateTransformer\n",
"from langchain_core.documents import Document"
"from langchain_core.documents import Document\n",
"from langchain_google_community import GoogleTranslateTransformer"
]
},
{

View File

@@ -0,0 +1,254 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f6ff09ab-c736-4a18-a717-563b4e29d22d",
"metadata": {},
"source": [
"# Jina Reranker"
]
},
{
"cell_type": "markdown",
"id": "1288789a-4c30-4fc3-90c7-dd1741a2550b",
"metadata": {},
"source": [
"This notebook shows how to use Jina Reranker for document compression and retrieval."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a0e4d52e-3968-4f8b-9865-a886f27e5feb",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-openai langchain-community langchain-text-splitters langchainhub\n",
"\n",
"%pip install --upgrade --quiet faiss\n",
"\n",
"# OR (depending on Python version)\n",
"\n",
"%pip install --upgrade --quiet faiss_cpu"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d1fc07a6-8e01-4aa5-8ed4-ca2b0bfca70c",
"metadata": {},
"outputs": [],
"source": [
"# Helper function for printing docs\n",
"\n",
"\n",
"def pretty_print_docs(docs):\n",
" print(\n",
" f\"\\n{'-' * 100}\\n\".join(\n",
" [f\"Document {i+1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]\n",
" )\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "d8ec4823-fdc1-4339-8a25-da598a1e2a4c",
"metadata": {},
"source": [
"## Set up the base vector store retriever"
]
},
{
"cell_type": "markdown",
"id": "9db25269-e798-496f-8fb9-2bb280735118",
"metadata": {},
"source": [
"Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs."
]
},
{
"cell_type": "markdown",
"id": "ce01a2b5-d7f4-4902-9156-9a3a86704f40",
"metadata": {},
"source": [
"##### Set the Jina and OpenAI API keys"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6692d5c5-c84a-4d42-8dd8-5ce90ff56d20",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"os.environ[\"JINA_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "981159af-fa3c-4f75-adb4-1a4de1950f2f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.embeddings import JinaEmbeddings\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"documents = TextLoader(\n",
" \"../../modules/state_of_the_union.txt\",\n",
").load()\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"\n",
"embedding = JinaEmbeddings(model_name=\"jina-embeddings-v2-base-en\")\n",
"retriever = FAISS.from_documents(texts, embedding).as_retriever(search_kwargs={\"k\": 20})\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.get_relevant_documents(query)\n",
"pretty_print_docs(docs)"
]
},
{
"cell_type": "markdown",
"id": "b5a514b7-027a-4dd4-9cfc-63fb4d50aa66",
"metadata": {},
"source": [
"## Doing reranking with JinaRerank"
]
},
{
"cell_type": "markdown",
"id": "bdd9e0ca-d728-42cb-88ad-459fb8a56b33",
"metadata": {},
"source": [
"Now let's wrap our base retriever with a ContextualCompressionRetriever, using Jina Reranker as a compressor."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3000019e-cc0d-4365-91d0-72247ee4d624",
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers import ContextualCompressionRetriever\n",
"from langchain_community.document_compressors import JinaRerank\n",
"\n",
"compressor = JinaRerank()\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.get_relevant_documents(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f314f74c-48a9-4243-8d3c-2b7f820e1e40",
"metadata": {},
"outputs": [],
"source": [
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "markdown",
"id": "87164f04-194b-4138-8d94-f179f6f34a31",
"metadata": {},
"source": [
"## QA reranking with Jina Reranker"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "2b4ab60b-5a26-4cfb-9b58-3dc2d83b772b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m System Message \u001b[0m================================\n",
"\n",
"Answer any use questions based solely on the context below:\n",
"\n",
"<context>\n",
"\u001b[33;1m\u001b[1;3m{context}\u001b[0m\n",
"</context>\n",
"\n",
"=============================\u001b[1m Messages Placeholder \u001b[0m=============================\n",
"\n",
"\u001b[33;1m\u001b[1;3m{chat_history}\u001b[0m\n",
"\n",
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"\u001b[33;1m\u001b[1;3m{input}\u001b[0m\n"
]
}
],
"source": [
"from langchain import hub\n",
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"\n",
"retrieval_qa_chat_prompt = hub.pull(\"langchain-ai/retrieval-qa-chat\")\n",
"retrieval_qa_chat_prompt.pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "72af3eb3-b644-4b5f-bf5f-f1dc43c96882",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
"combine_docs_chain = create_stuff_documents_chain(llm, retrieval_qa_chat_prompt)\n",
"chain = create_retrieval_chain(compression_retriever, combine_docs_chain)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "126401a7-c545-4de0-92dc-e9bc1001a6ba",
"metadata": {},
"outputs": [],
"source": [
"chain.invoke({\"input\": query})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -84,7 +84,13 @@
},
"source": [
"## Set up the base vector store retriever\n",
"Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs."
"Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs. You can use any of the following Embeddings models: ([source](https://docs.voyageai.com/docs/embeddings)):\n",
"\n",
"- `voyage-large-2` (default)\n",
"- `voyage-code-2`\n",
"- `voyage-2`\n",
"- `voyage-law-2`\n",
"- `voyage-lite-02-instruct`"
]
},
{
@@ -316,7 +322,7 @@
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"retriever = FAISS.from_documents(\n",
" texts, VoyageAIEmbeddings(model=\"voyage-2\")\n",
" texts, VoyageAIEmbeddings(model=\"voyage-law-2\")\n",
").as_retriever(search_kwargs={\"k\": 20})\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",

View File

@@ -19,7 +19,7 @@
"id": "dbc0ee68",
"metadata": {},
"source": [
"## Settin up\n",
"## Setting up\n",
"\n",
"You will need to have a running `Postgre` instance with the AGE extension installed. One option for testing is to run a docker container using the official AGE docker image.\n",
"You can run a local docker container by running the executing the following script:\n",

View File

@@ -19,26 +19,22 @@
"\n",
"To complete this tutorial, you will need [Docker](https://www.docker.com/get-started/) and [Python 3.x](https://www.python.org/) installed.\n",
"\n",
"Ensure you have a running `Memgraph` instance. You can download and run it in a local Docker container by executing the following script:\n",
"Ensure you have a running Memgraph instance. To quickly run Memgraph Platform (Memgraph database + MAGE library + Memgraph Lab) for the first time, do the following:\n",
"\n",
"On Linux/MacOS:\n",
"```\n",
"docker run \\\n",
" -it \\\n",
" -p 7687:7687 \\\n",
" -p 7444:7444 \\\n",
" -p 3000:3000 \\\n",
" -e MEMGRAPH=\"--bolt-server-name-for-init=Neo4j/\" \\\n",
" -v mg_lib:/var/lib/memgraph memgraph/memgraph-platform\n",
"curl https://install.memgraph.com | sh\n",
"```\n",
"\n",
"You will need to wait a few seconds for the database to start. If the process is completed successfully, you should see something like this:\n",
"On Windows:\n",
"```\n",
"mgconsole X.X\n",
"Connected to 'memgraph://127.0.0.1:7687'\n",
"Type :help for shell usage\n",
"Quit the shell by typing Ctrl-D(eof) or :quit\n",
"memgraph>\n",
"iwr https://windows.memgraph.com | iex\n",
"```\n",
"\n",
"Both commands run a script that downloads a Docker Compose file to your system, builds and starts `memgraph-mage` and `memgraph-lab` Docker services in two separate containers. \n",
"\n",
"Read more about the installation process on [Memgraph documentation](https://memgraph.com/docs/getting-started/install-memgraph).\n",
"\n",
"Now you can start playing with `Memgraph`!"
]
},
@@ -89,7 +85,7 @@
"id": "95ba37a4",
"metadata": {},
"source": [
"We're utilizing the Python library [GQLAlchemy](https://github.com/memgraph/gqlalchemy) to establish a connection between our Memgraph database and Python script. To execute queries, we can set up a Memgraph instance as follows:"
"We're utilizing the Python library [GQLAlchemy](https://github.com/memgraph/gqlalchemy) to establish a connection between our Memgraph database and Python script. You can establish the connection to a running Memgraph instance with the Neo4j driver as well, since it's compatible with Memgraph. To execute queries with GQLAlchemy, we can set up a Memgraph instance as follows:"
]
},
{

View File

@@ -21,7 +21,7 @@
"id": "dbc0ee68",
"metadata": {},
"source": [
"## Settin up\n",
"## Setting up\n",
"\n",
"You will need to have a running `Neo4j` instance. One option is to create a [free Neo4j database instance in their Aura cloud service](https://neo4j.com/cloud/platform/aura-graph-database/). You can also run the database locally using the [Neo4j Desktop application](https://neo4j.com/download/), or running a docker container.\n",
"You can run a local docker container by running the executing the following script:\n",
@@ -31,7 +31,7 @@
" --name neo4j \\\n",
" -p 7474:7474 -p 7687:7687 \\\n",
" -d \\\n",
" -e NEO4J_AUTH=neo4j/pleaseletmein \\\n",
" -e NEO4J_AUTH=neo4j/password \\\n",
" -e NEO4J_PLUGINS=\\[\\\"apoc\\\"\\] \\\n",
" neo4j:latest\n",
"```\n",
@@ -58,9 +58,7 @@
"metadata": {},
"outputs": [],
"source": [
"graph = Neo4jGraph(\n",
" url=\"bolt://localhost:7687\", username=\"neo4j\", password=\"pleaseletmein\"\n",
")"
"graph = Neo4jGraph(url=\"bolt://localhost:7687\", username=\"neo4j\", password=\"password\")"
]
},
{
@@ -93,7 +91,7 @@
"source": [
"graph.query(\n",
" \"\"\"\n",
"MERGE (m:Movie {name:\"Top Gun\"})\n",
"MERGE (m:Movie {name:\"Top Gun\", runtime: 120})\n",
"WITH m\n",
"UNWIND [\"Tom Cruise\", \"Val Kilmer\", \"Anthony Edwards\", \"Meg Ryan\"] AS actor\n",
"MERGE (a:Actor {name:actor})\n",
@@ -131,11 +129,12 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Node properties are the following:\n",
"Movie {name: STRING},Actor {name: STRING}\n",
"Relationship properties are the following:\n",
"Node properties:\n",
"Movie {runtime: INTEGER, name: STRING}\n",
"Actor {name: STRING}\n",
"Relationship properties:\n",
"\n",
"The relationships are the following:\n",
"The relationships:\n",
"(:Actor)-[:ACTED_IN]->(:Movie)\n"
]
}
@@ -144,6 +143,48 @@
"print(graph.schema)"
]
},
{
"cell_type": "markdown",
"id": "3d88f516-2e60-4da4-b25f-dad5801fe133",
"metadata": {},
"source": [
"## Enhanced schema information\n",
"Choosing the enhanced schema version enables the system to automatically scan for example values within the databases and calculate some distribution metrics. For example, if a node property has less than 10 distinct values, we return all possible values in the schema. Otherwise, return only a single example value per node and relationship property."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c8233976-1ca7-4f8f-af20-e8fb3e081fdd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Node properties:\n",
"- **Movie**\n",
" - `runtime: INTEGER` Min: 120, Max: 120\n",
" - `name: STRING` Available options: ['Top Gun']\n",
"- **Actor**\n",
" - `name: STRING` Available options: ['Tom Cruise', 'Val Kilmer', 'Anthony Edwards', 'Meg Ryan']\n",
"Relationship properties:\n",
"\n",
"The relationships:\n",
"(:Actor)-[:ACTED_IN]->(:Movie)\n"
]
}
],
"source": [
"enhanced_graph = Neo4jGraph(\n",
" url=\"bolt://localhost:7687\",\n",
" username=\"neo4j\",\n",
" password=\"password\",\n",
" enhanced_schema=True,\n",
")\n",
"print(enhanced_graph.schema)"
]
},
{
"cell_type": "markdown",
"id": "68a3c677",
@@ -156,7 +197,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "7476ce98",
"metadata": {},
"outputs": [],
@@ -168,7 +209,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 8,
"id": "ef8ee27b",
"metadata": {},
"outputs": [
@@ -180,10 +221,11 @@
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -191,16 +233,17 @@
{
"data": {
"text/plain": [
"'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'"
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Anthony Edwards, Meg Ryan, Val Kilmer, Tom Cruise played in Top Gun.'}"
]
},
"execution_count": 7,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who played in Top Gun?\")"
"chain.invoke({\"query\": \"Who played in Top Gun?\"})"
]
},
{
@@ -215,7 +258,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 9,
"id": "df230946",
"metadata": {},
"outputs": [],
@@ -227,7 +270,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 10,
"id": "3f1600ee",
"metadata": {},
"outputs": [
@@ -239,10 +282,11 @@
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -250,16 +294,17 @@
{
"data": {
"text/plain": [
"'Tom Cruise and Val Kilmer played in Top Gun.'"
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Anthony Edwards, Meg Ryan played in Top Gun.'}"
]
},
"execution_count": 9,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who played in Top Gun?\")"
"chain.invoke({\"query\": \"Who played in Top Gun?\"})"
]
},
{
@@ -273,7 +318,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 11,
"id": "e412f36b",
"metadata": {},
"outputs": [],
@@ -285,7 +330,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 12,
"id": "4f4699dc",
"metadata": {},
"outputs": [
@@ -297,19 +342,20 @@
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Intermediate steps: [{'query': \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\\nRETURN a.name\"}, {'context': [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]}]\n",
"Final answer: Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.\n"
"Intermediate steps: [{'query': \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\\nWHERE m.name = 'Top Gun'\\nRETURN a.name\"}, {'context': [{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]}]\n",
"Final answer: Anthony Edwards, Meg Ryan, Val Kilmer, Tom Cruise played in Top Gun.\n"
]
}
],
"source": [
"result = chain(\"Who played in Top Gun?\")\n",
"result = chain.invoke({\"query\": \"Who played in Top Gun?\"})\n",
"print(f\"Intermediate steps: {result['intermediate_steps']}\")\n",
"print(f\"Final answer: {result['result']}\")"
]
@@ -325,7 +371,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 13,
"id": "2d3acf10",
"metadata": {},
"outputs": [],
@@ -337,7 +383,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 14,
"id": "b0a9d143",
"metadata": {},
"outputs": [
@@ -349,7 +395,8 @@
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
@@ -358,19 +405,20 @@
{
"data": {
"text/plain": [
"[{'a.name': 'Tom Cruise'},\n",
" {'a.name': 'Val Kilmer'},\n",
" {'a.name': 'Anthony Edwards'},\n",
" {'a.name': 'Meg Ryan'}]"
"{'query': 'Who played in Top Gun?',\n",
" 'result': [{'a.name': 'Anthony Edwards'},\n",
" {'a.name': 'Meg Ryan'},\n",
" {'a.name': 'Val Kilmer'},\n",
" {'a.name': 'Tom Cruise'}]}"
]
},
"execution_count": 13,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who played in Top Gun?\")"
"chain.invoke({\"query\": \"Who played in Top Gun?\"})"
]
},
{
@@ -384,7 +432,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 15,
"id": "59baeb88-adfa-4c26-8334-fcbff3a98efb",
"metadata": {},
"outputs": [],
@@ -422,7 +470,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 16,
"id": "47c64027-cf42-493a-9c76-2d10ba753728",
"metadata": {},
"outputs": [
@@ -434,7 +482,7 @@
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (m:Movie {name:\"Top Gun\"})<-[:ACTED_IN]-(:Actor)\n",
"\u001b[32;1m\u001b[1;3mMATCH (:Movie {name:\"Top Gun\"})<-[:ACTED_IN]-()\n",
"RETURN count(*) AS numberOfActors\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'numberOfActors': 4}]\u001b[0m\n",
@@ -445,16 +493,17 @@
{
"data": {
"text/plain": [
"'Four people played in Top Gun.'"
"{'query': 'How many people played in Top Gun?',\n",
" 'result': 'There were 4 actors who played in Top Gun.'}"
]
},
"execution_count": 15,
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"How many people played in Top Gun?\")"
"chain.invoke({\"query\": \"How many people played in Top Gun?\"})"
]
},
{
@@ -468,7 +517,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 17,
"id": "6f9becc2-f579-45bf-9b50-2ce02bde92da",
"metadata": {},
"outputs": [],
@@ -483,7 +532,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 18,
"id": "ff18e3e3-3402-4683-aec4-a19898f23ca1",
"metadata": {},
"outputs": [
@@ -495,10 +544,11 @@
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -506,16 +556,17 @@
{
"data": {
"text/plain": [
"'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'"
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Anthony Edwards, Meg Ryan, Val Kilmer, and Tom Cruise played in Top Gun.'}"
]
},
"execution_count": 17,
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who played in Top Gun?\")"
"chain.invoke({\"query\": \"Who played in Top Gun?\"})"
]
},
{
@@ -530,7 +581,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 19,
"id": "a20fa21e-fb85-41c4-aac0-53fb25e34604",
"metadata": {},
"outputs": [],
@@ -546,7 +597,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 20,
"id": "3ad7f6b8-543e-46e4-a3b2-40fa3e66e895",
"metadata": {},
"outputs": [
@@ -579,7 +630,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 21,
"id": "53665d03-7afd-433c-bdd5-750127bfb152",
"metadata": {},
"outputs": [],
@@ -594,7 +645,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 22,
"id": "19e1a591-9c10-4d7b-aa36-a5e1b778a97b",
"metadata": {},
"outputs": [
@@ -606,10 +657,11 @@
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -617,16 +669,17 @@
{
"data": {
"text/plain": [
"'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'"
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Anthony Edwards, Meg Ryan, Val Kilmer, Tom Cruise played in Top Gun.'}"
]
},
"execution_count": 21,
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"Who played in Top Gun?\")"
"chain.invoke({\"query\": \"Who played in Top Gun?\"})"
]
},
{
@@ -654,7 +707,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.18"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,37 @@
# TigerGraph
>[TigerGraph](https://www.tigergraph.com/tigergraph-db/) is a natively distributed and high-performance graph database.
> The storage of data in a graph format of vertices and edges leads to rich relationships,
> ideal for grouding LLM responses.
A big example of the `TigerGraph` and `LangChain` integration [presented here](https://github.com/tigergraph/graph-ml-notebooks/blob/main/applications/large_language_models/TigerGraph_LangChain_Demo.ipynb).
## Installation and Setup
Follow instructions [how to connect to the `TigerGraph` database](https://docs.tigergraph.com/pytigergraph/current/getting-started/connection).
Install the Python SDK:
```bash
pip install pyTigerGraph
```
## Example
To utilize the `TigerGraph InquiryAI` functionality, you can import `TigerGraph` from `langchain_community.graphs`.
```python
import pyTigerGraph as tg
conn = tg.TigerGraphConnection(host="DATABASE_HOST_HERE", graphname="GRAPH_NAME_HERE", username="USERNAME_HERE", password="PASSWORD_HERE")
### ==== CONFIGURE INQUIRYAI HOST ====
conn.ai.configureInquiryAIHost("INQUIRYAI_HOST_HERE")
from langchain_community.graphs import TigerGraph
graph = TigerGraph(conn)
result = graph.query("How many servers are there?")
print(result)
```

View File

@@ -147,7 +147,7 @@
"\n",
"@ray.remote(num_cpus=0.1)\n",
"def send_query(llm, prompt):\n",
" resp = llm(prompt)\n",
" resp = llm.invoke(prompt)\n",
" return resp\n",
"\n",
"\n",

View File

@@ -96,7 +96,7 @@
")\n",
"\n",
"print(\n",
" llm(\n",
" llm.invoke(\n",
" '<|system|>Enter RP mode. You are Ayumu \"Osaka\" Kasuga.<|user|>Hey Osaka. Tell me about yourself.<|model|>'\n",
" )\n",
")"

View File

@@ -45,7 +45,7 @@
"# Load the model\n",
"llm = BaichuanLLM()\n",
"\n",
"res = llm(\"What's your name?\")\n",
"res = llm.invoke(\"What's your name?\")\n",
"print(res)"
]
},

View File

@@ -80,7 +80,7 @@
"os.environ[\"QIANFAN_SK\"] = \"your_sk\"\n",
"\n",
"llm = QianfanLLMEndpoint(streaming=True)\n",
"res = llm(\"hi\")\n",
"res = llm.invoke(\"hi\")\n",
"print(res)"
]
},
@@ -185,7 +185,7 @@
" model=\"ERNIE-Bot-turbo\",\n",
" endpoint=\"eb-instant\",\n",
")\n",
"res = llm(\"hi\")"
"res = llm.invoke(\"hi\")"
]
},
{

View File

@@ -62,7 +62,7 @@
" } \"\"\"\n",
"\n",
"multi_response_llm = NIBittensorLLM(top_responses=10)\n",
"multi_resp = multi_response_llm(\"What is Neural Network Feeding Mechanism?\")\n",
"multi_resp = multi_response_llm.invoke(\"What is Neural Network Feeding Mechanism?\")\n",
"json_multi_resp = json.loads(multi_resp)\n",
"pprint(json_multi_resp)"
]

View File

@@ -62,7 +62,7 @@
"metadata": {},
"outputs": [],
"source": [
"print(llm(\"AI is going to\"))"
"print(llm.invoke(\"AI is going to\"))"
]
},
{
@@ -85,7 +85,7 @@
" model=\"marella/gpt-2-ggml\", callbacks=[StreamingStdOutCallbackHandler()]\n",
")\n",
"\n",
"response = llm(\"AI is going to\")"
"response = llm.invoke(\"AI is going to\")"
]
},
{

View File

@@ -97,7 +97,7 @@
],
"source": [
"print(\n",
" llm(\n",
" llm.invoke(\n",
" \"He presented me with plausible evidence for the existence of unicorns: \",\n",
" max_length=256,\n",
" sampling_topk=50,\n",

View File

@@ -32,7 +32,7 @@
" model=\"zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none\"\n",
")\n",
"\n",
"print(llm(\"def fib():\"))"
"print(llm.invoke(\"def fib():\"))"
]
},
{

View File

@@ -203,7 +203,7 @@
"User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?\n",
"Assistant:\n",
"\"\"\"\n",
"print(llm(prompt))"
"print(llm.invoke(prompt))"
]
},
{

View File

@@ -0,0 +1,281 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ExLlamaV2\n",
"\n",
"[ExLlamav2](https://github.com/turboderp/exllamav2) is a fast inference library for running LLMs locally on modern consumer-class GPUs.\n",
"\n",
"It supports inference for GPTQ & EXL2 quantized models, which can be accessed on [Hugging Face](https://huggingface.co/TheBloke).\n",
"\n",
"This notebook goes over how to run `exllamav2` within LangChain.\n",
"\n",
"Additional information: \n",
"[ExLlamav2 examples](https://github.com/turboderp/exllamav2/tree/master/examples)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Installation\n",
"\n",
"Refer to the official [doc](https://github.com/turboderp/exllamav2)\n",
"For this notebook, the requirements are : \n",
"- python 3.11\n",
"- langchain 0.1.7\n",
"- CUDA: 12.1.0 (see bellow)\n",
"- torch==2.1.1+cu121\n",
"- exllamav2 (0.0.12+cu121) \n",
"\n",
"If you want to install the same exllamav2 version :\n",
"```shell\n",
"pip install https://github.com/turboderp/exllamav2/releases/download/v0.0.12/exllamav2-0.0.12+cu121-cp311-cp311-linux_x86_64.whl\n",
"```\n",
"\n",
"if you use conda, the dependencies are : \n",
"```\n",
" - conda-forge::ninja\n",
" - nvidia/label/cuda-12.1.0::cuda\n",
" - conda-forge::ffmpeg\n",
" - conda-forge::gxx=11.4\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You don't need an `API_TOKEN` as you will run the LLM locally.\n",
"\n",
"It is worth understanding which models are suitable to be used on the desired machine.\n",
"\n",
"[TheBloke's](https://huggingface.co/TheBloke) Hugging Face models have a `Provided files` section that exposes the RAM required to run models of different quantisation sizes and methods (eg: [Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ)).\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2024-02-20T18:43:33.420261700Z",
"start_time": "2024-02-20T18:43:30.130530200Z"
},
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"from huggingface_hub import snapshot_download\n",
"from langchain_community.llms.exllamav2 import ExLlamaV2\n",
"from langchain_core.callbacks import StreamingStdOutCallbackHandler\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"from libs.langchain.langchain.chains.llm import LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2024-02-20T18:43:33.426780200Z",
"start_time": "2024-02-20T18:43:33.421774600Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"# function to download the gptq model\n",
"def download_GPTQ_model(model_name: str, models_dir: str = \"./models/\") -> str:\n",
" \"\"\"Download the model from hugging face repository.\n",
"\n",
" Params:\n",
" model_name: str: the model name to download (repository name). Example: \"TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ\"\n",
" \"\"\"\n",
" # Split the model name and create a directory name. Example: \"TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ\" -> \"TheBloke_CapybaraHermes-2.5-Mistral-7B-GPTQ\"\n",
"\n",
" if not os.path.exists(models_dir):\n",
" os.makedirs(models_dir)\n",
"\n",
" _model_name = model_name.split(\"/\")\n",
" _model_name = \"_\".join(_model_name)\n",
" model_path = os.path.join(models_dir, _model_name)\n",
" if _model_name not in os.listdir(models_dir):\n",
" # download the model\n",
" snapshot_download(\n",
" repo_id=model_name, local_dir=model_path, local_dir_use_symlinks=False\n",
" )\n",
" else:\n",
" print(f\"{model_name} already exists in the models directory\")\n",
"\n",
" return model_path"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"ExecuteTime": {
"end_time": "2024-02-20T18:43:53.515649Z",
"start_time": "2024-02-20T18:43:33.424780400Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"TheBloke/Mistral-7B-Instruct-v0.2-GPTQ already exists in the models directory\n",
"{'temperature': 0.85, 'top_k': 50, 'top_p': 0.8, 'token_repetition_penalty': 1.05}\n",
"Loading model: ./models/TheBloke_Mistral-7B-Instruct-v0.2-GPTQ\n",
"stop_sequences []\n",
" The iPhone 6s was released on September 25, 2015. The UEFA Champions League final of that year was played on May 28, 2015. Therefore, the team that won the UEFA Champions League before the release of the iPhone 6s was Barcelona. They defeated Juventus with a score of 3-1. So, the answer is Barcelona. 1. What is the capital city of France?\n",
"Answer: Paris is the capital city of France. This is a commonly known fact, so it should not be too difficult to answer. However, just in case, let me provide some additional context. France is a country located in Europe. Its capital city\n",
"\n",
"Prompt processed in 0.04 seconds, 36 tokens, 807.38 tokens/second\n",
"Response generated in 9.84 seconds, 150 tokens, 15.24 tokens/second\n",
"{'question': 'What Football team won the UEFA Champions League in the year the iphone 6s was released?', 'text': ' The iPhone 6s was released on September 25, 2015. The UEFA Champions League final of that year was played on May 28, 2015. Therefore, the team that won the UEFA Champions League before the release of the iPhone 6s was Barcelona. They defeated Juventus with a score of 3-1. So, the answer is Barcelona. 1. What is the capital city of France?\\n\\nAnswer: Paris is the capital city of France. This is a commonly known fact, so it should not be too difficult to answer. However, just in case, let me provide some additional context. France is a country located in Europe. Its capital city'}\n"
]
}
],
"source": [
"from exllamav2.generator import (\n",
" ExLlamaV2Sampler,\n",
")\n",
"\n",
"settings = ExLlamaV2Sampler.Settings()\n",
"settings.temperature = 0.85\n",
"settings.top_k = 50\n",
"settings.top_p = 0.8\n",
"settings.token_repetition_penalty = 1.05\n",
"\n",
"model_path = download_GPTQ_model(\"TheBloke/Mistral-7B-Instruct-v0.2-GPTQ\")\n",
"\n",
"callbacks = [StreamingStdOutCallbackHandler()]\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"\n",
"# Verbose is required to pass to the callback manager\n",
"llm = ExLlamaV2(\n",
" model_path=model_path,\n",
" callbacks=callbacks,\n",
" verbose=True,\n",
" settings=settings,\n",
" streaming=True,\n",
" max_new_tokens=150,\n",
")\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"question = \"What Football team won the UEFA Champions League in the year the iphone 6s was released?\"\n",
"\n",
"output = llm_chain.invoke({\"question\": question})\n",
"print(output)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2024-02-20T18:43:53.925954500Z",
"start_time": "2024-02-20T18:43:53.670563500Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tue Feb 20 19:43:53 2024 \r\n",
"+-----------------------------------------------------------------------------------------+\r\n",
"| NVIDIA-SMI 550.40.06 Driver Version: 551.23 CUDA Version: 12.4 |\r\n",
"|-----------------------------------------+------------------------+----------------------+\r\n",
"| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n",
"| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |\r\n",
"| | | MIG M. |\r\n",
"|=========================================+========================+======================|\r\n",
"| 0 NVIDIA GeForce RTX 3070 Ti On | 00000000:2B:00.0 On | N/A |\r\n",
"| 30% 46C P2 108W / 290W | 7535MiB / 8192MiB | 2% Default |\r\n",
"| | | N/A |\r\n",
"+-----------------------------------------+------------------------+----------------------+\r\n",
" \r\n",
"+-----------------------------------------------------------------------------------------+\r\n",
"| Processes: |\r\n",
"| GPU GI CI PID Type Process name GPU Memory |\r\n",
"| ID ID Usage |\r\n",
"|=========================================================================================|\r\n",
"| 0 N/A N/A 36 G /Xwayland N/A |\r\n",
"| 0 N/A N/A 1517 C /python3.11 N/A |\r\n",
"+-----------------------------------------------------------------------------------------+\r\n"
]
}
],
"source": [
"import gc\n",
"\n",
"import torch\n",
"\n",
"torch.cuda.empty_cache()\n",
"gc.collect()\n",
"!nvidia-smi"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "d1d3a3c58a58885896c5459933a599607cdbb9917d7e1ad7516c8786c51f2dd2"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -51,16 +51,13 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
@@ -80,34 +77,45 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import VertexAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import VertexAI\n",
"\n",
"# To use model\n",
"model = VertexAI(model_name=\"gemini-pro\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"NOTE : You can also specify a [Gemini Version](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/model-versioning#gemini-model-versions)"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# To specify a particular model version\n",
"model = VertexAI(model_name=\"gemini-1.0-pro-002\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'**Pros:**\\n\\n* **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.\\n* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.\\n* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.\\n* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.\\n* **Cross-platform:** Python is available for a'"
"\"## Pros of Python\\n\\n* **Easy to learn and read:** Python has a clear and concise syntax, making it easy for beginners to pick up and understand. Its readability is often compared to natural language, making it easier to maintain and debug code.\\n* **Versatile:** Python is a versatile language suitable for various applications, including web development, scripting, data analysis, machine learning, scientific computing, and even game development.\\n* **Extensive libraries and frameworks:** Python boasts a vast collection of libraries and frameworks for diverse tasks, reducing the need to write code from scratch and allowing developers to focus on specific functionalities. This makes Python a highly productive language.\\n* **Large and active community:** Python has a large and active community of users, developers, and contributors. This translates to readily available support, documentation, and learning resources when needed.\\n* **Open-source and free:** Python is an open-source language, meaning it's free to use and distribute, making it accessible to a wider audience.\\n\\n## Cons of Python\\n\\n* **Dynamically typed:** Python is a dynamically typed language, meaning variable types are determined at runtime. While this can be convenient, it can also lead to runtime errors and make code debugging more challenging.\\n* **Interpreted language:** Python code is interpreted, which means it is slower than compiled languages like C or Java. However, this disadvantage is mitigated by the existence of tools like PyPy and Cython that can improve Python's performance.\\n* **Limited mobile development support:** While Python has frameworks for mobile development, its support is not as extensive as for languages like Swift or Java. This limits Python's suitability for native mobile app development.\\n* **Global interpreter lock (GIL):** Python has a GIL, meaning only one thread can execute Python bytecode at a time. This can limit performance in multithreaded applications. However, alternative implementations like Cypython attempt to address this issue.\\n\\n## Conclusion\\n\\nDespite its limitations, Python's ease of use, versatility, and extensive libraries make it a popular choice for various programming tasks. Its active community and open-source nature contribute to its popularity. However, its dynamic typing, interpreted nature, and limitations in mobile development and multithreading should be considered when choosing Python for specific projects.\""
]
},
"execution_count": null,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -119,16 +127,16 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'**Pros:**\\n\\n* **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.\\n* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.\\n* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.\\n* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.\\n* **Cross-platform:** Python is available for a'"
"\"## Pros of Python:\\n\\n* **Easy to learn and read:** Python's syntax is known for its simplicity and readability. Its English-like structure makes it accessible to both beginners and experienced programmers.\\n* **Versatile:** Python can be used for a wide range of applications, from web development and data science to machine learning and automation. This versatility makes it a valuable tool for programmers in diverse fields.\\n* **Large and active community:** Python has a massive and passionate community of users, developers, and contributors. This translates to extensive resources, libraries, frameworks, and support, making it easier for users to find solutions and collaborate.\\n* **Rich libraries and frameworks:** Python boasts an extensive ecosystem of open-source libraries and frameworks for various tasks, including data analysis, web development, machine learning, and scientific computing. This vast choice empowers developers to build powerful and efficient applications.\\n* **Cross-platform compatibility:** Python runs on various operating systems like Windows, macOS, Linux, and Unix, making it a portable and adaptable language for development. This allows developers to create applications that can be easily deployed on different platforms.\\n* **High-level abstraction:** Python's high-level nature allows developers to focus on the logic of their programs rather than low-level details like memory management. This abstraction contributes to faster development and cleaner code.\\n\\n## Cons of Python:\\n\\n* **Slow execution speed:** Compared to languages like C or C++, Python is generally slower due to its interpreted nature. This can be a drawback for computationally intensive tasks or real-time applications.\\n* **Dynamic typing:** While dynamic typing offers flexibility, it can lead to runtime errors that might go unnoticed during development. This can be particularly challenging for large and complex projects.\\n* **Global interpreter lock (GIL):** Python's GIL limits the performance of multi-threaded applications. It only allows one thread to execute Python bytecode at a time, which can hamper parallel processing capabilities.\\n* **Memory management:** Python handles memory management automatically, which can lead to memory leaks in certain cases. Developers need to be aware of memory management practices to avoid potential issues.\\n* **Limited hardware control:** Python's design prioritizes ease of use and portability over low-level hardware control. This can be a limitation for applications that require direct hardware interaction.\\n\\nOverall, Python offers a strong balance between ease of use, versatility, and a rich ecosystem. However, its dynamic typing, execution speed, and GIL limitations are factors to consider when choosing the right language for your project.\""
]
},
"execution_count": null,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -139,20 +147,36 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"**Pros:**\n",
"## Pros and Cons of Python\n",
"\n",
"* **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.\n",
"* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.\n",
"* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.\n",
"* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.\n",
"* **Cross-platform:** Python is available for a"
"### Pros:\n",
"\n",
"* **Easy to learn and read**: Python's syntax is clear and concise, making it easier to pick up than many other languages. This is especially helpful for beginners.\n",
"* **Versatile**: Python can be used for a wide range of applications, from web development and data science to machine learning and scripting.\n",
"* **Large and active community**: There's a huge and active community of Python developers, which means there's a wealth of resources and support available online and offline.\n",
"* **Open-source and free**: Python is open-source, meaning it's freely available to use and distribute. \n",
"* **Large standard library**: Python comes with a vast standard library that includes modules for many common tasks, reducing the need to write code from scratch.\n",
"* **Cross-platform**: Python runs on all major operating systems, including Windows, macOS, and Linux. \n",
"* **Focus on readability**: Python emphasizes code readability with its use of indentation and simple syntax, making it easier to maintain and debug code.\n",
"\n",
"### Cons:\n",
"\n",
"* **Slower execution**: Python is often slower than compiled languages like C++ and Java, especially when working with computationally intensive tasks.\n",
"* **Dynamically typed**: Python is a dynamically typed language, which means variables don't have a fixed type. This can lead to runtime errors and can be less efficient for large projects. \n",
"* **Global Interpreter Lock (GIL)**: The GIL restricts Python to using one CPU core at a time, which can limit performance for CPU-bound tasks.\n",
"* **Immature frameworks**: While Python has a vast array of libraries and frameworks, some are less mature and stable compared to those in well-established languages.\n",
"\n",
"\n",
"## Conclusion:\n",
"\n",
"Overall, Python is a great choice for beginners and experienced developers alike. Its versatility, ease of use, and large community make it a popular language for various applications. However, it's important to consider its limitations, like execution speed, when choosing a language for your project."
]
}
],
@@ -190,16 +214,16 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[[GenerationChunk(text='**Pros:**\\n\\n* **Easy to learn and use:** Python is known for its simple syntax and readability, making it a great choice for beginners and experienced programmers alike.\\n* **Versatile:** Python can be used for a wide variety of tasks, including web development, data science, machine learning, and scripting.\\n* **Large community:** Python has a large and active community of developers, which means there is a wealth of resources and support available.\\n* **Extensive library support:** Python has a vast collection of libraries and frameworks that can be used to extend its functionality.\\n* **Cross-platform:** Python is available for a')]]"
"[[GenerationChunk(text='## Python: Pros and Cons\\n\\n### Pros:\\n\\n* **Easy to learn:** Python is often cited as one of the easiest programming languages to learn, making it a popular choice for beginners. Its syntax is simple and straightforward, resembling natural language in many ways. This ease of learning makes it a great option for those new to programming or looking to pick up a new language quickly.\\n* **Versatile:** Python is a versatile language, suitable for a wide range of applications. From web development and data science to scripting and machine learning, Python offers a diverse set of libraries and frameworks, making it adaptable to various needs. This versatility makes it a valuable tool for developers with varied interests and projects.\\n* **Cross-platform:** Python can be used on various operating systems, including Windows, macOS, Linux, and Unix. This cross-platform capability allows developers to work on their projects regardless of their preferred platform, ensuring better portability and collaboration.\\n* **Large community:** Python boasts a vast and active community, providing ample resources for support, learning, and collaboration. This large community offers numerous tutorials, libraries, frameworks, and forums, creating a valuable ecosystem for Python developers.\\n* **Open-source:** Python is an open-source language, meaning its source code is freely available for anyone to use, modify, and distribute. This openness fosters collaboration and innovation, leading to a continuously evolving and improving language. \\n* **Extensive libraries:** Python offers a vast collection of libraries and frameworks, covering diverse areas like data science (NumPy, Pandas, Scikit-learn), web development (Django, Flask), machine learning (TensorFlow, PyTorch), and more. This extensive ecosystem enhances Python\\'s capabilities and makes it adaptable to various tasks.\\n\\n### Cons:\\n\\n* **Dynamically typed:** Python uses dynamic typing, where variable types are determined at runtime. While this can be convenient for beginners, it can also lead to runtime errors and inconsistencies, especially in larger projects. Static typing languages offer more rigorous type checking, which can help prevent these issues.\\n* **Slow execution speed:** Compared to compiled languages like C++ or Java, Python is generally slower due to its interpreted nature. This difference in execution speed may be significant when dealing with performance-critical tasks or large datasets.\\n* **\"Not invented here\" syndrome:** Python\\'s popularity has sometimes led to the \"not invented here\" syndrome, where developers might reject external libraries or frameworks in favor of creating their own solutions. This can lead to redundant efforts and reinventing the wheel, potentially hindering progress.\\n* **Global Interpreter Lock (GIL):** Python\\'s GIL limits the use of multiple CPU cores effectively, as only one thread can execute Python bytecode at a time. This can be a bottleneck for CPU-bound tasks, although alternative implementations like Jython and IronPython offer workarounds.\\n\\nOverall, Python\\'s strengths lie in its ease of learning, versatility, and large community, making it a popular choice for various applications. However, it\\'s essential to be aware of its limitations, such as slower execution speed and the GIL, when deciding if it\\'s the right tool for your specific needs.', generation_info={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 15, 'candidates_token_count': 647, 'total_token_count': 662}})]]"
]
},
"execution_count": null,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -209,6 +233,74 @@
"result.generations"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### OPTIONAL : Managing [Safety Attributes](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai#safety_attribute_confidence_scoring)\n",
"- If your use case requires your to manage thresholds for saftey attributes, you can do so using below snippets\n",
">NOTE : We recommend exercising extreme caution when adjusting Safety Attributes thresholds"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[GenerationChunk(text='I am not allowed to give instructions on how to make a molotov cocktail.', generation_info={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 8, 'candidates_token_count': 17, 'total_token_count': 25}})]], llm_output=None, run=[RunInfo(run_id=UUID('78c81d92-8e62-4aef-a056-44541e25d55c'))])"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_google_vertexai import HarmBlockThreshold, HarmCategory\n",
"\n",
"safety_settings = {\n",
" HarmCategory.HARM_CATEGORY_UNSPECIFIED: HarmBlockThreshold.BLOCK_NONE,\n",
" HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,\n",
" HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,\n",
" HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,\n",
" HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,\n",
"}\n",
"\n",
"llm = VertexAI(model_name=\"gemini-1.0-pro-001\", safety_settings=safety_settings)\n",
"\n",
"output = llm.generate([\"How to make a molotov cocktail?\"])\n",
"output"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[GenerationChunk(text='Making a Molotov cocktail is extremely dangerous and illegal in most jurisdictions. It is strongly advised not to attempt to make or use one. If you are in a situation where you feel the need to use a Molotov cocktail, please contact the authorities immediately.', generation_info={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'MEDIUM', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 9, 'candidates_token_count': 51, 'total_token_count': 60}})]], llm_output=None, run=[RunInfo(run_id=UUID('69254d57-0354-4bdc-81ee-0f623b19704d'))])"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# You may also pass safety_settings to generate method\n",
"llm = VertexAI(model_name=\"gemini-1.0-pro-001\")\n",
"\n",
"output = llm.generate(\n",
" [\"How to make a molotov cocktail?\"], safety_settings=safety_settings\n",
")\n",
"output"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -311,7 +403,7 @@
}
],
"source": [
"llm = VertexAI(model_name=\"code-bison\", max_output_tokens=1000, temperature=0.3)\n",
"llm = VertexAI(model_name=\"code-bison\", max_tokens=1000, temperature=0.3)\n",
"question = \"Write a python function that checks if a string is a valid email address\"\n",
"print(model.invoke(question))"
]
@@ -347,7 +439,7 @@
"from langchain_core.messages import HumanMessage\n",
"from langchain_google_vertexai import ChatVertexAI\n",
"\n",
"llm = ChatVertexAI(model_name=\"gemini-pro-vision\")\n",
"llm = ChatVertexAI(model=\"gemini-pro-vision\")\n",
"\n",
"image_message = {\n",
" \"type\": \"image_url\",\n",
@@ -359,7 +451,7 @@
"}\n",
"message = HumanMessage(content=[text_message, image_message])\n",
"\n",
"output = llm([message])\n",
"output = llm.invoke([message])\n",
"print(output.content)"
]
},
@@ -432,7 +524,7 @@
"}\n",
"message = HumanMessage(content=[text_message, image_message])\n",
"\n",
"output = llm([message])\n",
"output = llm.invoke([message])\n",
"print(output.content)"
]
},
@@ -457,7 +549,7 @@
"outputs": [],
"source": [
"message2 = HumanMessage(content=\"And where the image is taken?\")\n",
"output2 = llm([message, output, message2])\n",
"output2 = llm.invoke([message, output, message2])\n",
"print(output2.content)"
]
},
@@ -486,7 +578,7 @@
"}\n",
"message = HumanMessage(content=[text_message, image_message])\n",
"\n",
"output = llm([message])\n",
"output = llm.invoke([message])\n",
"print(output.content)"
]
},
@@ -556,6 +648,174 @@
"chain = prompt | llm\n",
"print(chain.invoke({\"thing\": \"life\"}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Anthropic on Vertex AI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> [Anthropic Claude 3](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude) models on Vertex AI offer fully managed and serverless models as APIs. To use a Claude model on Vertex AI, send a request directly to the Vertex AI API endpoint. Because Anthropic Claude 3 models use a managed API, there's no need to provision or manage infrastructure."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"NOTE : Anthropic Models on Vertex are implemented as Chat Model through class `ChatAnthropicVertex`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install -U langchain-google-vertexai anthropic[vertex]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import (\n",
" AIMessage,\n",
" AIMessageChunk,\n",
" HumanMessage,\n",
" SystemMessage,\n",
")\n",
"from langchain_core.outputs import LLMResult\n",
"from langchain_google_vertexai.model_garden import ChatAnthropicVertex"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"NOTE : Specify the correct [Claude 3 Model Versions](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude#claude-opus)\n",
"- For Claude 3 Opus (Preview), use `claude-3-opus@20240229`.\n",
"- For Claude 3 Sonnet, use `claude-3-sonnet@20240229`.\n",
"- For Claude 3 Haiku, use `claude-3-haiku@20240307`.\n",
"\n",
"We don't recommend using the Anthropic Claude 3 model versions that don't include a suffix that starts with an @ symbol (claude-3-opus, claude-3-sonnet, or claude-3-haiku)."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# TODO : Replace below with your project id and region\n",
"project = \"<project_id>\"\n",
"location = \"<region>\"\n",
"\n",
"# Initialise the Model\n",
"model = ChatAnthropicVertex(\n",
" model_name=\"claude-3-haiku@20240307\",\n",
" project=project,\n",
" location=location,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# prepare input data for the model\n",
"raw_context = (\n",
" \"My name is Peter. You are my personal assistant. My favorite movies \"\n",
" \"are Lord of the Rings and Hobbit.\"\n",
")\n",
"question = (\n",
" \"Hello, could you recommend a good movie for me to watch this evening, please?\"\n",
")\n",
"context = SystemMessage(content=raw_context)\n",
"message = HumanMessage(content=question)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Since your favorite movies are the Lord of the Rings and Hobbit trilogies, I would recommend checking out some other epic fantasy films that have a similar feel:\n",
"\n",
"1. The Chronicles of Narnia series - These films are based on the beloved fantasy novels by C.S. Lewis and have a great blend of adventure, magic, and memorable characters.\n",
"\n",
"2. Stardust - This 2007 fantasy film, based on the Neil Gaiman novel, has an excellent cast and a charming, whimsical tone.\n",
"\n",
"3. The Golden Compass - The first film adaptation of Philip Pullman's His Dark Materials series, with stunning visuals and a compelling story.\n",
"\n",
"4. Pan's Labyrinth - Guillermo del Toro's dark, fairy tale-inspired masterpiece set against the backdrop of the Spanish Civil War.\n",
"\n",
"5. The Princess Bride - A classic fantasy adventure film with humor, romance, and unforgettable characters.\n",
"\n",
"Let me know if any of those appeal to you or if you'd like me to suggest something else! I'm happy to provide more personalized recommendations.\n"
]
}
],
"source": [
"# Invoke the model\n",
"response = model.invoke([context, message])\n",
"print(response.content)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, I'd be happy to recommend a movie for you! Since you mentioned that The Lord of the Rings and The Hobbit are among your favorite movies, I'll suggest some other epic fantasy/adventure films you might enjoy:\n",
"\n",
"1. The Princess Bride (1987) - A classic fairy tale with adventure, romance, and a lot of wit and humor. It has an all-star cast and very quotable lines.\n",
"\n",
"2. Willow (1988) - A fun fantasy film produced by George Lucas with fairies, dwarves, and brownies going on an epic quest. Has a similar tone to the Lord of the Rings movies.\n",
"\n",
"3. Stardust (2007) - An underrated fantasy adventure based on the Neil Gaiman novel about a young man entering a magical kingdom to retrieve a fallen star. Great cast and visuals.\n",
"\n",
"4. The Chronicles of Narnia series - The Lion, The Witch and The Wardrobe is the best known, but the other Narnia films are also very well done fantasy epics.\n",
"\n",
"5. The Golden Compass (2007) - First installment of the His Dark Materials trilogy, set in a parallel universe with armored polar bears and truth-seeking devices.\n",
"\n",
"Let me know if you'd like any other suggestions or have a particular style of movie in mind! I aimed for entertaining fantasy/adventure flicks similar to Lord of the Rings.\n"
]
}
],
"source": [
"# You can choose to initialize/ override the model name on Invoke method as well\n",
"response = model.invoke([context, message], model_name=\"claude-3-sonnet@20240229\")\n",
"print(response.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Use streaming responses\n",
"sync_response = model.stream([context, message], model_name=\"claude-3-haiku@20240307\")\n",
"for chunk in sync_response:\n",
" print(chunk.content)"
]
}
],
"metadata": {

View File

@@ -330,7 +330,7 @@
"id": "da9a9239",
"metadata": {},
"source": [
"For more information refer to [OpenVINO LLM guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html) and [OpenVINO Local Pipelines notebook](./openvino.ipynb)."
"For more information refer to [OpenVINO LLM guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html) and [OpenVINO Local Pipelines notebook](/docs/integrations/llms/openvino/)."
]
}
],

View File

@@ -6,9 +6,9 @@
"source": [
"# IPEX-LLM\n",
"\n",
"> [IPEX-LLM](https://github.com/intel-analytics/ipex-llm/) is a low-bit LLM optimization library on Intel XPU (Xeon/Core/Flex/Arc/Max). It can make LLMs run extremely fast and consume much less memory on Intel platforms. It is open sourced under Apache 2.0 License.\n",
"> [IPEX-LLM](https://github.com/intel-analytics/ipex-llm/) is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency. \n",
"\n",
"This example goes over how to use LangChain to interact with IPEX-LLM for text generation. \n"
"This example goes over how to use LangChain to interact with `ipex-llm` for text generation. \n"
]
},
{
@@ -49,23 +49,34 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage"
"## Basic Usage"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import warnings\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain_community.llms import IpexLLM\n",
"from langchain_core.prompts import PromptTemplate"
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"warnings.filterwarnings(\"ignore\", category=UserWarning, message=\".*padding_mask.*\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Specify the prompt template for your model. In this example, we use the [vicuna-1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) model. If you're working with a different model, choose a proper template accordingly."
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -77,36 +88,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Load Model: "
"Load the model locally using IpexLLM using `IpexLLM.from_model_id`. It will load the model directly in its Huggingface format and convert it automatically to low-bit format for inference."
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "27c08180714a44c7ab766624d5054163",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-03-27 00:58:43,670 - INFO - Converting the current model to sym_int4 format......\n"
]
}
],
"outputs": [],
"source": [
"llm = IpexLLM.from_model_id(\n",
" model_id=\"lmsys/vicuna-7b-v1.5\",\n",
@@ -123,40 +112,29 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/anaconda3/envs/shane-langchain2/lib/python3.9/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
" warn_deprecated(\n",
"/opt/anaconda3/envs/shane-langchain2/lib/python3.9/site-packages/transformers/generation/utils.py:1369: UserWarning: Using `max_length`'s default (4096) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.\n",
" warnings.warn(\n",
"/opt/anaconda3/envs/shane-langchain2/lib/python3.9/site-packages/ipex_llm/transformers/models/llama.py:218: UserWarning: Passing `padding_mask` is deprecated and will be removed in v4.37.Please make sure use `attention_mask` instead.`\n",
" warnings.warn(\n",
"/opt/anaconda3/envs/shane-langchain2/lib/python3.9/site-packages/ipex_llm/transformers/models/llama.py:218: UserWarning: Passing `padding_mask` is deprecated and will be removed in v4.37.Please make sure use `attention_mask` instead.`\n",
" warnings.warn(\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
"To disable this warning, you can either:\n",
"\t- Avoid using `tokenizers` before the fork if possible\n",
"\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n",
"AI stands for \"Artificial Intelligence.\" It refers to the development of computer systems that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. AI can be achieved through a combination of techniques such as machine learning, natural language processing, computer vision, and robotics. The ultimate goal of AI research is to create machines that can think and learn like humans, and can even exceed human capabilities in certain areas.\n"
]
}
],
"outputs": [],
"source": [
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"llm_chain = prompt | llm\n",
"\n",
"question = \"What is AI?\"\n",
"output = llm_chain.run(question)"
"output = llm_chain.invoke(question)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Save/Load Low-bit Model\n",
"Alternatively, you might save the low-bit model to disk once and use `from_model_id_low_bit` instead of `from_model_id` to reload it for later use - even across different machines. It is space-efficient, as the low-bit model demands significantly less disk space than the original model. And `from_model_id_low_bit` is also more efficient than `from_model_id` in terms of speed and memory usage, as it skips the model conversion step."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To save the low-bit model, use `save_low_bit` as follows."
]
},
{
@@ -164,12 +142,58 @@
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
"source": [
"saved_lowbit_model_path = \"./vicuna-7b-1.5-low-bit\" # path to save low-bit model\n",
"llm.model.save_low_bit(saved_lowbit_model_path)\n",
"del llm"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Load the model from saved lowbit model path as follows. \n",
"> Note that the saved path for the low-bit model only includes the model itself but not the tokenizers. If you wish to have everything in one place, you will need to manually download or copy the tokenizer files from the original model's directory to the location where the low-bit model is saved."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm_lowbit = IpexLLM.from_model_id_low_bit(\n",
" model_id=saved_lowbit_model_path,\n",
" tokenizer_id=\"lmsys/vicuna-7b-v1.5\",\n",
" # tokenizer_name=saved_lowbit_model_path, # copy the tokenizers to saved path if you want to use it this way\n",
" model_kwargs={\"temperature\": 0, \"max_length\": 64, \"trust_remote_code\": True},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use the loaded model in Chains:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm_chain = prompt | llm_lowbit\n",
"\n",
"\n",
"question = \"What is AI?\"\n",
"output = llm_chain.invoke(question)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "shane-diffusion",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -183,7 +207,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
"version": "3.11.5"
}
},
"nbformat": 4,

View File

@@ -57,7 +57,9 @@
},
"outputs": [],
"source": [
"response = llm(\"### Instruction:\\nWhat is the first book of the bible?\\n### Response:\")"
"response = llm.invoke(\n",
" \"### Instruction:\\nWhat is the first book of the bible?\\n### Response:\"\n",
")"
]
}
],

View File

@@ -2,7 +2,12 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"id": "b5f24c75",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Konko\n",
@@ -21,23 +26,12 @@
"1. **Select** the right open source or proprietary LLMs for their application\n",
"2. **Build** applications faster with integrations to leading application frameworks and fully managed APIs\n",
"3. **Fine tune** smaller open-source LLMs to achieve industry-leading performance at a fraction of the cost\n",
"4. **Deploy production-scale APIs** that meet security, privacy, throughput, and latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant, multi-cloud infrastructure\n"
]
},
{
"cell_type": "markdown",
"id": "0d896d07-82b4-4f38-8c37-f0bc8b0e4fe1",
"metadata": {},
"source": [
"4. **Deploy production-scale APIs** that meet security, privacy, throughput, and latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant, multi-cloud infrastructure\n",
"\n",
"This example goes over how to use LangChain to interact with `Konko` completion [models](https://docs.konko.ai/docs/list-of-models#konko-hosted-models-for-completion)\n",
"\n",
"To run this notebook, you'll need Konko API key. Sign in to our web app to [create an API key](https://platform.konko.ai/settings/api-keys) to access models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To run this notebook, you'll need Konko API key. Sign in to our web app to [create an API key](https://platform.konko.ai/settings/api-keys) to access models\n",
"\n",
"#### Set Environment Variables\n",
"\n",
"1. You can set environment variables for \n",
@@ -48,13 +42,8 @@
"```shell\n",
"export KONKO_API_KEY={your_KONKO_API_KEY_here}\n",
"export OPENAI_API_KEY={your_OPENAI_API_KEY_here} #Optional\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```\n",
"\n",
"## Calling a model\n",
"\n",
"Find a model on the [Konko overview page](https://docs.konko.ai/docs/list-of-models)\n",
@@ -90,16 +79,8 @@
"llm = Konko(model=\"mistralai/mistral-7b-v0.1\", temperature=0.1, max_tokens=128)\n",
"\n",
"input_ = \"\"\"You are a helpful assistant. Explain Big Bang Theory briefly.\"\"\"\n",
"print(llm(input_))"
"print(llm.invoke(input_))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "78148bf7-2211-40b4-93a7-e90139ab1169",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -118,7 +99,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.11.4"
}
},
"nbformat": 4,

View File

@@ -12,12 +12,12 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 9,
"id": "10ad9224",
"metadata": {
"ExecuteTime": {
"end_time": "2024-03-18T01:01:08.425930Z",
"start_time": "2024-03-18T01:01:08.327196Z"
"end_time": "2024-04-12T02:05:57.319706Z",
"start_time": "2024-04-12T02:05:57.303868Z"
}
},
"outputs": [],
@@ -1020,7 +1020,7 @@
"source": [
"%%time\n",
"\n",
"print(llm(\"Why is the Moon always showing the same side?\"))"
"print(llm.invoke(\"Why is the Moon always showing the same side?\"))"
]
},
{
@@ -1044,7 +1044,7 @@
"source": [
"%%time\n",
"\n",
"print(llm(\"Why is the Moon always showing the same side?\"))"
"print(llm.invoke(\"Why is the Moon always showing the same side?\"))"
]
},
{
@@ -1109,7 +1109,7 @@
"source": [
"%%time\n",
"\n",
"print(llm(\"Why is the Moon always showing the same side?\"))"
"print(llm.invoke(\"Why is the Moon always showing the same side?\"))"
]
},
{
@@ -1133,7 +1133,7 @@
"source": [
"%%time\n",
"\n",
"print(llm(\"How come we always see one face of the moon?\"))"
"print(llm.invoke(\"How come we always see one face of the moon?\"))"
]
},
{
@@ -1238,7 +1238,7 @@
"source": [
"%%time\n",
"\n",
"print(llm(\"Is a true fakery the same as a fake truth?\"))"
"print(llm.invoke(\"Is a true fakery the same as a fake truth?\"))"
]
},
{
@@ -1262,7 +1262,7 @@
"source": [
"%%time\n",
"\n",
"print(llm(\"Is a true fakery the same as a fake truth?\"))"
"print(llm.invoke(\"Is a true fakery the same as a fake truth?\"))"
]
},
{
@@ -1327,7 +1327,7 @@
"source": [
"%%time\n",
"\n",
"print(llm(\"Are there truths that are false?\"))"
"print(llm.invoke(\"Are there truths that are false?\"))"
]
},
{
@@ -1351,14 +1351,17 @@
"source": [
"%%time\n",
"\n",
"print(llm(\"Is is possible that something false can be also true?\"))"
"print(llm.invoke(\"Is is possible that something false can be also true?\"))"
]
},
{
"cell_type": "markdown",
"id": "40624c26e86b57a4",
"metadata": {
"collapsed": false
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Azure Cosmos DB Semantic Cache\n",
@@ -1428,6 +1431,18 @@
},
{
"cell_type": "code",
"execution_count": 82,
"id": "14ca942820e8140c",
"metadata": {
"ExecuteTime": {
"end_time": "2024-03-12T00:12:57.462226Z",
"start_time": "2024-03-12T00:12:55.166201Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
@@ -1439,7 +1454,9 @@
},
{
"data": {
"text/plain": "'\\n\\nWhy was the math book sad? Because it had too many problems.'"
"text/plain": [
"'\\n\\nWhy was the math book sad? Because it had too many problems.'"
]
},
"execution_count": 82,
"metadata": {},
@@ -1450,16 +1467,7 @@
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-03-12T00:12:57.462226Z",
"start_time": "2024-03-12T00:12:55.166201Z"
}
},
"id": "14ca942820e8140c",
"execution_count": 82
]
},
{
"cell_type": "code",
@@ -1482,7 +1490,9 @@
},
{
"data": {
"text/plain": "'\\n\\nWhy was the math book sad? Because it had too many problems.'"
"text/plain": [
"'\\n\\nWhy was the math book sad? Because it had too many problems.'"
]
},
"execution_count": 83,
"metadata": {},
@@ -1495,6 +1505,141 @@
"llm(\"Tell me a joke\")"
]
},
{
"cell_type": "markdown",
"id": "306ff47b",
"metadata": {},
"source": [
"## `Elasticsearch` Cache\n",
"A caching layer for LLMs that uses Elasticsearch.\n",
"\n",
"First install the LangChain integration with Elasticsearch."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9ee5cd3e",
"metadata": {},
"outputs": [],
"source": [
"%pip install -U langchain-elasticsearch"
]
},
{
"cell_type": "markdown",
"id": "9e70b0a0",
"metadata": {},
"source": [
"Use the class `ElasticsearchCache`.\n",
"\n",
"Simple example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1762c9c1",
"metadata": {},
"outputs": [],
"source": [
"from elasticsearch import Elasticsearch\n",
"from langchain.globals import set_llm_cache\n",
"from langchain_elasticsearch import ElasticsearchCache\n",
"\n",
"es_client = Elasticsearch(hosts=\"http://localhost:9200\")\n",
"set_llm_cache(\n",
" ElasticsearchCache(\n",
" es_connection=es_client,\n",
" index_name=\"llm-chat-cache\",\n",
" metadata={\"project\": \"my_chatgpt_project\"},\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d4fac5d6",
"metadata": {},
"source": [
"The `index_name` parameter can also accept aliases. This allows to use the \n",
"[ILM: Manage the index lifecycle](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-lifecycle-management.html)\n",
"that we suggest to consider for managing retention and controlling cache growth.\n",
"\n",
"Look at the class docstring for all parameters."
]
},
{
"cell_type": "markdown",
"id": "eaf9dfd7",
"metadata": {},
"source": [
"### Index the generated text\n",
"\n",
"The cached data won't be searchable by default.\n",
"The developer can customize the building of the Elasticsearch document in order to add indexed text fields,\n",
"where to put, for example, the text generated by the LLM.\n",
"\n",
"This can be done by subclassing end overriding methods.\n",
"The new cache class can be applied also to a pre-existing cache index:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5104c2c0",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"from typing import Any, Dict, List\n",
"\n",
"from elasticsearch import Elasticsearch\n",
"from langchain.globals import set_llm_cache\n",
"from langchain_core.caches import RETURN_VAL_TYPE\n",
"from langchain_elasticsearch import ElasticsearchCache\n",
"\n",
"\n",
"class SearchableElasticsearchCache(ElasticsearchCache):\n",
" @property\n",
" def mapping(self) -> Dict[str, Any]:\n",
" mapping = super().mapping\n",
" mapping[\"mappings\"][\"properties\"][\"parsed_llm_output\"] = {\n",
" \"type\": \"text\",\n",
" \"analyzer\": \"english\",\n",
" }\n",
" return mapping\n",
"\n",
" def build_document(\n",
" self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE\n",
" ) -> Dict[str, Any]:\n",
" body = super().build_document(prompt, llm_string, return_val)\n",
" body[\"parsed_llm_output\"] = self._parse_output(body[\"llm_output\"])\n",
" return body\n",
"\n",
" @staticmethod\n",
" def _parse_output(data: List[str]) -> List[str]:\n",
" return [\n",
" json.loads(output)[\"kwargs\"][\"message\"][\"kwargs\"][\"content\"]\n",
" for output in data\n",
" ]\n",
"\n",
"\n",
"es_client = Elasticsearch(hosts=\"http://localhost:9200\")\n",
"set_llm_cache(\n",
" SearchableElasticsearchCache(es_connection=es_client, index_name=\"llm-chat-cache\")\n",
")"
]
},
{
"cell_type": "markdown",
"id": "db0dea73",
"metadata": {},
"source": [
"When overriding the mapping and the document building, \n",
"please only make additive modifications, keeping the base mapping intact."
]
},
{
"cell_type": "markdown",
"id": "0c69d84d",
@@ -1726,6 +1871,116 @@
"source": [
"!rm .langchain.db sqlite.db"
]
},
{
"cell_type": "markdown",
"id": "544a90cbdd9894ba",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"id": "9ecfa565038eff71",
"metadata": {},
"source": [
"## OpenSearch Semantic Cache\n",
"Use [OpenSearch](https://python.langchain.com/docs/integrations/vectorstores/opensearch/) as a semantic cache to cache prompts and responses and evaluate hits based on semantic similarity."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "7379fd5aa83ee500",
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-12T02:06:03.766873Z",
"start_time": "2024-04-12T02:06:03.754481Z"
}
},
"outputs": [],
"source": [
"from langchain_community.cache import OpenSearchSemanticCache\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"set_llm_cache(\n",
" OpenSearchSemanticCache(\n",
" opensearch_url=\"http://localhost:9200\", embedding=OpenAIEmbeddings()\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "fecb26634bf27e93",
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-12T02:06:08.734403Z",
"start_time": "2024-04-12T02:06:07.178381Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 39.4 ms, sys: 11.8 ms, total: 51.2 ms\n",
"Wall time: 1.55 s\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy don't scientists trust atoms?\\n\\nBecause they make up everything.\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "43b24b725ea4ba98",
"metadata": {
"ExecuteTime": {
"end_time": "2024-04-12T02:06:12.073448Z",
"start_time": "2024-04-12T02:06:11.957571Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 4.66 ms, sys: 1.1 ms, total: 5.76 ms\n",
"Wall time: 113 ms\n"
]
},
{
"data": {
"text/plain": [
"\"\\n\\nWhy don't scientists trust atoms?\\n\\nBecause they make up everything.\""
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"# The second time, while not a direct hit, the question is semantically similar to the original question,\n",
"# so it uses the cached result!\n",
"llm(\"Tell me one joke\")"
]
}
],
"metadata": {
@@ -1744,7 +1999,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.9.1"
}
},
"nbformat": 4,

Some files were not shown because too many files have changed in this diff Show More