This PR adds support for Databricks runtime and Databricks SQL by using the [Databricks SQL Connector for Python](https://docs.databricks.com/dev-tools/python-sql-connector.html). Because Databricks is a cloud data platform, connecting to it requires a URI of the form `databricks://token:{api_token}@{hostname}?http_path={http_path}&catalog={catalog}&schema={schema}`. The URI is **complicated**, and it may take users a while to figure it out. Since `api_token`, `hostname`, and `http_path` are already known inside a Databricks notebook, I am proposing a new method `from_databricks` to simplify the connection to Databricks.

## In Databricks Notebook

After this change, Databricks users only need to specify the `catalog` and `schema` fields when using langchain.

<img width="881" alt="image" src="https://github.com/hwchase17/langchain/assets/1097932/984b4c57-4c2d-489d-b060-5f4918ef2f37">

## In Jupyter Notebook

The method can be used in a local setup as well:

<img width="678" alt="image" src="https://github.com/hwchase17/langchain/assets/1097932/142e8805-a6ef-4919-b28e-9796ca31ef19">
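For readers who cannot see the screenshots, here is a minimal usage sketch. It assumes the proposed `from_databricks` classmethod lives on `langchain.SQLDatabase` and that its optional connection arguments mirror the URI fields above (`host`, `api_token`, `http_path`); those keyword names and the catalog/schema values are illustrative placeholders, not a confirmed signature.

```python
from langchain import SQLDatabase

# Before this change: assemble the full SQLAlchemy URI by hand.
uri = (
    "databricks://token:{api_token}@{hostname}"
    "?http_path={http_path}&catalog=main&schema=default"
).format(
    api_token="<api_token>",
    hostname="<hostname>",
    http_path="<http_path>",
)
db = SQLDatabase.from_uri(uri)

# After this change, inside a Databricks notebook the runtime already knows
# the API token, hostname, and HTTP path, so only catalog and schema are needed.
db = SQLDatabase.from_databricks(catalog="main", schema="default")

# Outside Databricks (e.g. a local Jupyter notebook), the connection details
# can be passed explicitly. The keyword names below are assumptions drawn
# from the URI fields above.
db = SQLDatabase.from_databricks(
    catalog="main",
    schema="default",
    host="<hostname>",        # assumed parameter name
    api_token="<api_token>",  # assumed parameter name
    http_path="<http_path>",  # assumed parameter name
)
```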
- api.ipynb
- constitutional_chain.ipynb
- flare.ipynb
- llm_bash.ipynb
- llm_checker.ipynb
- llm_math.ipynb
- llm_requests.ipynb
- llm_summarization_checker.ipynb
- moderation.ipynb
- multi_prompt_router.ipynb
- multi_retrieval_qa_router.ipynb
- openai_openapi.yaml
- openapi.ipynb
- pal.ipynb
- sqlite.ipynb