Compare commits

...

3 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| vowelparrot | 78aa32a0bc | Add Datasets Info | 2023-05-17 13:08:09 -07:00 |
| vowelparrot | 36fb2e20bf | words | 2023-05-17 10:09:09 -07:00 |
| vowelparrot | de1d1c9020 | Update Tracing Docs | 2023-05-17 09:27:56 -07:00 |
7 changed files with 486 additions and 80 deletions


@@ -1,57 +1,107 @@
# LangChain Tracing
LangChain Plus helps you visualize, monitor, and evaluate LLM applications. To get started with local or hosted tracing, use one of the following guides.
- [Locally Hosted Setup](../tracing/local_installation.md)
- [Cloud Hosted Setup](../tracing/hosted_installation.md)
_Our hosted alpha is currently invite-only. To sign up for the wait list, please fill out the form [here](https://forms.gle/tRCEMSeopZf6TE3b6)._
## Saving Traces
Once you've launched the local tracing server or made an account and retrieved an API key to the hosted solution, your LangChain application will automatically log traces as long as
you set the following environment variables:
```bash
export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_SESSION="my session name" # Otherwise, traces are stored in the "default" session
# export LANGCHAIN_ENDPOINT="https://api.langchain.plus" # Uncomment if using hosted server
# export LANGCHAIN_API_KEY="my api key" # Uncomment and add your API key generated from the settings page if using a hosted server
```
As long as these variables are correctly set, and the server is online, all your LangChain runs will be saved. You can interact with these traces in the UI or using the `LangChainPlus` client.
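For example, here is a minimal sketch of producing a trace. It assumes the variables above are exported and an OpenAI API key is set; any chain or agent works the same way:
```python
import os
from langchain.llms import OpenAI

os.environ["LANGCHAIN_TRACING_V2"] = "true"  # same effect as the export above

llm = OpenAI(temperature=0)
# Every chain, agent, or LLM call made while tracing is enabled
# is logged to the configured session.
llm("Say hello to the tracing server!")
```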
## Tracing UI Walkthrough
When you first access the LangChain Plus UI (and after signing in, if you are using the hosted version), you should be greeted by the home screen with more instructions on how to get started.
From here, you can navigate to the `Sessions` and `Datasets` pages. For more information on using datasets in LangChain Plus, check out the [Datasets](../tracing/datasets.md) guide.
Traces from your LangChain runs can be found in the `Sessions` page. A "default" session should already be created for you.
A session is just a way to group traces together. If you click on a session, it will take you to a page with no recorded runs.
You can create and save traces to new sessions by specifying the `LANGCHAIN_SESSION` environment variable in your LangChain application. Check out the [Changing Sessions](#changing-sessions) section below for more configuration options.
<!-- TODO Add screenshots when the UI settles down a bit -->
<!-- ![](../tracing/homepage.png) -->
If we click on the `default` session, we can see that to start we have no traces stored.
<!-- TODO Add screenshots when the UI settles down a bit -->
<!-- ![](../tracing/default_empty.png) -->
If we now start running chains and agents with tracing enabled, we will see data show up here.
<!-- To do so, we can run [this notebook](../tracing/agent_with_tracing.ipynb) as an example. After running it, we will see an initial trace show up. -->
<!-- TODO Add screenshots when the UI settles down a bit -->
From here we can explore the trace at a high level by clicking on the arrow to show nested runs.
We can keep on clicking further and further down to explore deeper and deeper.
<!-- TODO Add screenshots when the UI settles down a bit -->
<!-- ![](../tracing/explore.png) -->
We can also click on the "Explore" button of the top level run to dive even deeper.
Here, we can see the inputs and outputs in full, as well as all the nested traces.
<!-- TODO Add screenshots when the UI settles down a bit -->
<!-- ![](../tracing/explore_trace.png) -->
We can keep on exploring each of these nested traces in more detail.
For example, here is the lowest level trace with the exact inputs/outputs to the LLM.
<!-- TODO Add screenshots when the UI settles down a bit -->
<!-- ![](../tracing/explore_llm.png) -->
## Changing Sessions
1. To record traces to a session other than `"default"`, you can set the `LANGCHAIN_SESSION` environment variable to the name of the session you want to record to:
```python
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_SESSION"] = "my_session"
```
2. To switch sessions mid-script or mid-notebook, you have a few options:
a. Explicitly pass in a new `LangChainTracer` callback
```python
from langchain.callbacks.tracers import LangChainTracer

tracer = LangChainTracer(session_name="My new session")
# `agent` is any chain or agent you've already constructed
agent.run("How many people live in canada as of 2023?", callbacks=[tracer])
```
b. Use the `tracing_v2_enabled` context manager:
```python
import os
from langchain.callbacks.manager import tracing_v2_enabled
os.environ["LANGCHAIN_SESSION"] = "my_session"
# ... traces logged to "my_session" ...
with tracing_v2_enabled("My Scoped Session Name"):
# ... traces logged to "My New Session" ...
```
c. Update the `LANGCHAIN_SESSION` environment variable (not thread safe)
```python
import os
os.environ["LANGCHAIN_SESSION"] = "my_session"
# ... traces logged to 'my_session' ...
os.environ["LANGCHAIN_SESSION"] = "My New Session"
# ... traces logged to "My New Session" ...
```

docs/tracing/datasets.md Normal file

@@ -0,0 +1,180 @@
# Datasets
This guide provides instructions on how to use traced Datasets in LangChain Plus.
Datasets are broadly useful for developing and productionizing LLM applications, as they enable:
- comparing the results of different models or prompts to pick the most appropriate configuration.
- testing for regressions in LLM or agent behavior over known use cases.
- running a model N times to measure the stability of its predictions and infer the reliability of its performance.
- running an evaluation chain over your agents' outputs to quantify their performance.
As well as many other applications.
## Creating a Dataset
Datasets store examples holding the inputs and outputs (or 'ground truth' labels) of LLM, chat model, chain, or agent runs.
You can create datasets using the `LangChainPlusClient` (which connects to the tracing server's REST API) or in the UI.
## Using the UI
You can directly create Datasets in the UI in two ways:
- **Upload data from a CSV file.**
1. Click on the `Datasets` page in the LangChain Plus homepage or click `Menu` in the top right-hand corner and click 'Datasets'
2. Click "Upload CSV"
3. Upload the CSV, and specify the column names that represent the LLM or Chain's inputs and outputs
<!-- TODO: Add a screenshot -->
- **Convert traced runs to a Dataset**
1. Navigate to a Session containing runs.
2. For rows you wish to add, click on the "+" sign on the right-hand side of the row to "Add Example to Dataset"
3. Either "Create dataset" or select an existing one to design where to add. If you wish to update the expected output, you can updates the text in the box.
<!-- TODO: Add screenshots -->
## Using the LangChainPlusClient
The `LangChainPlusClient` connects to the tracing server's REST API. For more information on the client, please reference the [LangChain Plus Client](./langchain_plus_client.md) guide.
To create a client:
```python
# import os
# os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus" # Uncomment this line if you want to use the hosted version
# os.environ["LANGCHAIN_API_KEY"] = "<YOUR-LANGCHAINPLUS-API-KEY>" # Uncomment this line if you want to use the hosted version.
from langchain.client import LangChainPlusClient
client = LangChainPlusClient()
```
### Datasets and the LangChainPlusClient
The following are two simple ways to create a dataset with the client:
- **Upload data from a CSV or pandas DataFrame**
```python
csv_path = "path/too/data.csv"
input_keys = ["input"] # column names that will be input to Chain or LLM
output_keys = ["output"] # column names that are the output of the Chain or LLM
description = "My dataset for evaluation"
dataset = client.upload_csv(
csv_path,
description=description,
input_keys=input_keys,
output_keys=output_keys,
)
# Or as a DataFrame
import pandas as pd
df = pd.read_csv(csv_path)
dataset = client.upload_dataframe(
df,
"My Dataset",
description=description,
input_keys=input_keys,
output_keys=output_keys,
)
```
- **Create a dataset from traced runs.** Assuming you've already captured runs in a session called "My Agent Session":
```python
runs = client.list_runs(session_name="My Agent Session", error=False) # List runs in my session that don't have errors
dataset = client.create_dataset("My Dataset", "Examples from My Agent")
for run in runs:
    client.create_example(inputs=run.inputs, outputs=run.outputs, dataset_id=dataset.id)
```
## Using Datasets
The `LangChainPlusClient` can help you flexibly run any LangChain object over your datasets. Below are a few common use cases.
Before running any of these examples, make sure you have created your client and datasets using one of the methods above.
```python
import os

from langchain.client import LangChainPlusClient
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_SESSION"] = "Tracing Walkthrough"
# os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus" # Uncomment this line if you want to use the hosted version
# os.environ["LANGCHAIN_API_KEY"] = "<YOUR-LANGCHAINPLUS-API-KEY>" # Uncomment this line if you want to use the hosted version.
client = LangChainPlusClient()
```
### Running LLMs over Datasets
Once you've created a dataset (we'll call it "LLM Dataset" here) with a string prompt input and generated outputs, you can
compare results by specifying other LLMs and running over the saved dataset.
```python
from langchain import OpenAI
dataset_name = "LLM Dataset" # Update to the correct dataset
llm = OpenAI(temperature=0)
llm_results = await client.arun_on_dataset(
    dataset_name=dataset_name,
    llm_or_chain_factory=llm,
)
```
The traces from this run will be saved in a new session linked to the dataset, and the model outputs
will be returned. You can also run the LLM synchronously if async isn't supported
(though this will likely take longer).
```python
# Run the LLM synchronously
llm_results = client.run_on_dataset(
    dataset_name=dataset_name,
    llm_or_chain_factory=llm,
)
```
You can then view the UI to see the run results in a new session.
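If you prefer to inspect the results programmatically, you can also list the logged runs with the client. A sketch (the session name here is an assumption; check the UI for the name of the session created for your test run):
```python
# The session name below is a placeholder -- substitute the session
# the UI shows for your dataset run.
test_runs = client.list_runs(session_name="LLM Dataset", error=False)
for run in test_runs:
    print(run.inputs, "->", run.outputs)
```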
### Running Chat Models over Datasets
You can run Chat Models over datasets captured from LLM or Chat Model runs as well.
```python
from langchain.chat_models import ChatOpenAI
dataset_name = "Chat Model Dataset"
llm = ChatOpenAI(temperature=0)
llm_results = await client.arun_on_dataset(
    dataset_name=dataset_name,
    llm_or_chain_factory=llm,
)
```
The synchronous `client.run_on_dataset` method is also available for chat models.
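A sketch of the synchronous call, mirroring the async example above (and assuming `run_on_dataset` accepts a model instance the same way `arun_on_dataset` does):
```python
# Synchronous version of the chat model evaluation above
llm_results = client.run_on_dataset(
    dataset_name=dataset_name,
    llm_or_chain_factory=llm,
)
```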
### Running Chains over Datasets
You can also run any chain or agent over stored datasets to do things like evaluate outputs and compare prompts, models, and tool usage.
Many chains contain `memory`, so to treat each example independently, we have to pass in a "chain factory" (or constructor) that tells the
client how to create the chain. This also means that chains that interact with remote/persistent storage must be configured appropriately to
avoid sharing the same memory across examples. If you know your chain does _not_ use memory, this factory can be a simple lambda (`lambda: my_agent`) that avoids
re-creating objects.
```python
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType

llm = ChatOpenAI(temperature=0)
tools = load_tools(['serpapi', 'llm-math'], llm=llm)
agent_factory = lambda: initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)
dataset_name = "Agent Dataset"
agent_results = await client.arun_on_dataset(
    dataset_name=dataset_name,
    llm_or_chain_factory=agent_factory,
)
```
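By contrast, here is a sketch of a factory for a chain that does hold memory; `ConversationChain` is used purely for illustration, and the dataset name is a placeholder:
```python
from langchain import OpenAI
from langchain.chains import ConversationChain

# ConversationChain keeps buffer memory by default, so build a fresh
# instance per example to keep dataset rows independent.
def conversation_chain_factory():
    return ConversationChain(llm=OpenAI(temperature=0))

chain_results = await client.arun_on_dataset(
    dataset_name="My Dataset",  # placeholder; use your own dataset
    llm_or_chain_factory=conversation_chain_factory,
)
```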


@@ -1,36 +1,75 @@
# Cloud Hosted Tracing Setup
This guide provides instructions for setting up your environment to use the cloud-hosted version of the LangChain Plus tracing server. For instructions on locally hosted tracing, please reference the [Locally Hosted Tracing Setup](./local_installation.md) guide.
We offer a hosted version of tracing at the [LangChain Plus website](https://www.langchain.plus/). You can use this to interact with your traces and evaluation datasets without having to install the local server.
**Note**: We are currently only offering this to a limited number of users. The hosted platform is in the alpha stage, actively under development, and data might be dropped at any time. Do not depend on data being persisted in the system long term and refrain from logging traces that may contain sensitive information. If you're interested in using the hosted platform, please fill out the form [here](https://forms.gle/tRCEMSeopZf6TE3b6).
## Environment Setup
Follow these steps to set up your environment to use the cloud-hosted tracing server:
1. Log in to the system and click "API Key" in the top right corner. Generate a new key and assign it to the `LANGCHAIN_API_KEY` environment variable.
```bash
export LANGCHAIN_API_KEY="your api key"
```
## Environment Configuration
Once you've set up your account, configure your LangChain application's environment to use tracing. This can be done by setting an environment variable in your terminal by running:
```bash
export LANGCHAIN_TRACING_V2=true
```
You can also add the following snippet to the top of every script:
```python
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
os.environ["LANGCHAIN_TRACING_V2"] = "true"
```
Additionally, you need to set an environment variable to specify the endpoint:
```bash
export LANGCHAIN_ENDPOINT="https://api.langchain.plus"
```
Here's an example of adding all relevant environment variables:
```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_ENDPOINT="https://api.langchain.plus"
export LANGCHAIN_API_KEY="my api key"
# export LANGCHAIN_SESSION="My Session Name" # Optional, otherwise, traces are logged to the "default" session
```
Or in Python:
```python
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://langchain-api-gateway-57eoxz8z.uc.gateway.dev"
os.environ["LANGCHAIN_API_KEY"] = "my_api_key" # Don't commit this to your repo! Better to set it in your terminal.
os.environ["LANGCHAIN_API_KEY"] = "my_api_key" # Don't commit this to your repo! Set it in your terminal instead.
# os.environ["LANGCHAIN_SESSION"] = "My Session Name" # Optional, otherwise, traces are logged to the "default" session
```
## Tracing Context Manager
Although using environment variables is recommended for most tracing use cases, you can also configure runs to be sent to a specific session using the context manager:
```python
from langchain.callbacks.manager import tracing_v2_enabled
with tracing_v2_enabled("My Session Name"):
...
```
## Congratulations!
Now that you've signed in, you can use the server to help debug, monitor, and evaluate your LangChain applications. What's next?
- For an overview of the LangChain Plus UI, check out the [LangChain Tracing](../additional_resources/tracing.md) guide.
- For information on how to use your traces as datasets for testing and evaluation, check out the [Datasets](./datasets.md) guide.


@@ -0,0 +1,53 @@
# LangChain Plus Client
The `LangChainPlusClient` is useful for interacting with a tracing server.
This guide explains how to create the client, how to connect to the server, and some of the functionality it enables.
For more information on using the client to evaluate agents on Datasets, check out our [Datasets](./datasets.md) guide.
This guide assumes you already have a [hosted account](../tracing/hosted_installation.md) or are running the
[locally hosted tracing server](../tracing/local_installation.md).
## Installation
The `LangChainPlusClient` is included as a part of your `langchain` installation. To install or upgrade, run:
```bash
pip install -U langchain
```
## Creating LangChainPlusClient
The `LangChainPlusClient` connects to the tracing server's REST API. To create a client:
```python
# import os
# os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus" # Uncomment this line if you want to use the hosted version
# os.environ["LANGCHAIN_API_KEY"] = "<YOUR-LANGCHAINPLUS-API-KEY>" # Uncomment this line if you want to use the hosted version.
from langchain.client import LangChainPlusClient
client = LangChainPlusClient()
```
## Listing Sessions
You can easily interact with Runs, Sessions (groups of traced runs), and Datasets with the client.
For instance, to retrieve all of the top level runs in the default session, run:
```python
session_name = "default"
runs = client.list_runs(session_name=session_name)
```
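The client also accepts filters; for example, `run_type` (part of the `list_runs` signature) can narrow the results. A sketch, assuming "llm" is among the recognized run types:
```python
# Fetch only LLM runs from the session; "llm" is an assumed value --
# inspect your runs' run_type field to confirm the exact strings.
llm_runs = client.list_runs(session_name=session_name, run_type="llm")
```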
To list sessions:
```python
sessions = client.list_sessions()
```
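Each returned session is an object you can iterate over; a sketch (the `name` attribute is an assumption based on how sessions are referenced elsewhere in these docs):
```python
for session in sessions:
    print(session.name)  # `name` is assumed; inspect the returned objects to confirm
```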
## Datasets
The client is also useful for evaluating agents and LLMs on datasets. Check out the [Datasets](./datasets.md) guide for more information.


@@ -1,35 +1,113 @@
# Locally Hosted Tracing Setup
This guide provides instructions for installing and setting up your environment to use the locally hosted version of the LangChain Plus tracing server. For instructions on a hosted tracing solution, please reference the [Hosted Tracing Setup](./hosted_installation.md) guide.
If you have Docker running, the snippet below is all you need to run a tracing "Hello, World!". Otherwise, continue with the Installation instructions.
```bash
pip install -U "langchain[openai]"
langchain plus start
LANGCHAIN_TRACING_V2=true python -c "from langchain.chat_models import ChatOpenAI; print(ChatOpenAI().predict('Hello, world!'))"
```
## Installation
1. Install the latest version of `langchain` by running the following command:
```bash
pip install -U langchain
```
2. Ensure Docker is installed and running on your system. To install Docker, refer to the [Get Docker](https://docs.docker.com/get-docker/) documentation.
3. Start the LangChain Plus tracing server by executing the following command in your terminal:
```bash
langchain plus start
```
_Note: The `langchain` command was installed when you installed the LangChain library using `pip install langchain`._
4. After the server has started, it will open the [Local UI](http://localhost). In the terminal, it will also display environment variables that you can configure to send your traces to the server. For more details on this, refer to the Environment Setup section below.
5. To stop the server, you can run the following command in your terminal:
```bash
langchain plus stop
```
## Environment Configuration
With the LangChain Plus tracing server running, you can begin sending traces by setting the `LANGCHAIN_TRACING_V2` environment variable:
Here's an example of adding all relevant environment variables:
```bash
export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_SESSION="My Session Name" # Optional, otherwise, traces are logged to the "default" session
```
Or in Python:
```python
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_SESSION"] = "My Session Name" # Optional, otherwise, traces are logged to the "default" session
```
## Tracing Context Manager
Although using environment variables is recommended for most tracing use cases, you can configure runs to be sent to a specific session using the context manager:
```python
from langchain.callbacks.manager import tracing_v2_enabled
with tracing_v2_enabled("My Session Name"):
...
```
## Connecting from a Remote Server
To connect to LangChainPlus when running applications on a remote server, such as a [Google Colab notebook](https://colab.research.google.com/), we offer two simple options:
1. Use our [hosted tracing](./hosted_installation.md) server.
2. Expose a public URL to your local tracing service.
Below are the full instructions to start and expose a local LangChainPlus server and connect to it from a remote server:
1. Ensure Docker is installed and running on your system. To install Docker, refer to the [Get Docker](https://docs.docker.com/get-docker/) documentation.
2. Install the latest version of `langchain` by running the following command:
```bash
pip install -U langchain
```
3. Start the LangChain Plus tracing server and expose it by executing the following command in your terminal:
```bash
langchain plus start --expose
```
Note: The `--expose` flag is required to expose your local server to the internet. By default, ngrok permits tunneling for up to 2 hours at a time. For longer sessions, you can make an [ngrok account](https://ngrok.com/) and use your auth token:
```bash
langchain plus start --expose --ngrok-authtoken "your auth token"
```
4. After the server has started, it will open the [Local LangChainPlus UI](http://localhost) as well as the [ngrok dashboard](http://0.0.0.0:4040/inspect/http). In the terminal, it will also display environment variables needed to send traces to the server via the tunnel URL. These will look something like the following:
```bash
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT=https://1234-01-23-45-678.ngrok.io
```
5. In your remote LangChain application, set the environment variables using the output from your terminal in the previous step:
```python
import os
os.environ["LANGCHAIN_TRACING_V2"] = True
os.environ["LANGCHAIN_ENDPOINT"] = "https://1234-01-23-45-678.ngrok.io" # Replace with your ngrok tunnel URL
```
6. Run your LangChain code and visualize the traces in the [LangChainPlus UI](http://localhost/sessions).
7. To stop the server, run the following command in your terminal:
```bash
langchain plus stop
```
## Congratulations!
Now that you've set up the tracing server, you can use it to debug, monitor, and evaluate your LangChain applications. What's next?
- For an overview of the LangChain Plus UI, check out the [LangChain Tracing](../additional_resources/tracing.md) guide.
- For information on how to use your traces as datasets for testing and evaluation, check out the [Datasets](./datasets.md) guide.


@@ -41,7 +41,7 @@ def get_docker_compose_command() -> List[str]:
"Neither 'docker compose' nor 'docker-compose'"
" commands are available. Please install the Docker"
" server following the instructions for your operating"
" system at https://docs.docker.com/engine/install/"
" system at https://docs.docker.com/get-docker/"
)
@@ -127,12 +127,13 @@ class PlusCommand:
]
)
logger.info(
"langchain plus server is running at http://localhost. To connect"
" locally, set the following environment variable"
" when running your LangChain application."
"The LangChain Plus server is running at http://localhost. To connect"
" locally, set the following environment variables"
" before running your LangChain application:\n"
)
logger.info("\tLANGCHAIN_TRACING_V2=true")
logger.info(f"\tLANGCHAIN_ENDPOINT=http://localhost:8000")
self._open_browser("http://localhost")
def _start_and_expose(self, auth_token: Optional[str]) -> None:
@@ -158,9 +159,10 @@ class PlusCommand:
)
ngrok_url = get_ngrok_url(auth_token)
logger.info(
"langchain plus server is running at http://localhost."
" To connect remotely, set the following environment"
" variable when running your LangChain application."
"The LangChain Plus server is running at http://localhost and"
f" exposed at URL {ngrok_url}. To connect remotely,"
" set the following environment variables"
" before running your LangChain application:\n"
)
logger.info("\tLANGCHAIN_TRACING_V2=true")
logger.info(f"\tLANGCHAIN_ENDPOINT={ngrok_url}")


@@ -229,6 +229,7 @@ class LangChainPlusClient(BaseSettings):
*,
session_id: Optional[str] = None,
session_name: Optional[str] = None,
execution_order: Optional[int] = 1,
run_type: Optional[str] = None,
**kwargs: Any,
) -> List[Run]:
@@ -238,7 +239,10 @@ class LangChainPlusClient(BaseSettings):
raise ValueError("Only one of session_id or session_name may be given")
session_id = self.read_session(session_name=session_name).id
query_params = ListRunsQueryParams(
session_id=session_id,
run_type=run_type,
execution_order=execution_order,
**kwargs,
)
filtered_params = {
k: v for k, v in query_params.dict().items() if v is not None
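With this change, `list_runs` filters to top-level runs (`execution_order=1`) by default. A quick sketch of caller-side usage under that assumption:
```python
from langchain.client import LangChainPlusClient

client = LangChainPlusClient()
# execution_order=1 (the new default) returns only top-level runs;
# passing execution_order=None drops the filter entirely, since
# None-valued query params are stripped before the request is sent.
top_level_runs = client.list_runs(session_name="default", execution_order=1)
```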