docs: small Tableau docs update (#30827)

Description: small Tableau docs update
Issue: adds required environment variable
Dependencies: tableau-langchain
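
For context, a minimal sketch of the configuration this change documents, with variable names taken from the notebook cell edited in the hunks below. The `user@example.com` value is a placeholder, and the tool factory's remaining connection arguments (Tableau domain, JWT credentials, REST API version) sit outside these hunks, so they are only summarized in a comment:

```python
import os

# Hedged sketch of the setup cell this commit updates; names mirror the notebook.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")  # model API key the Agent and Tools read from the environment
model_provider = "openai"          # name of the model provider backing the Agent
tooling_llm_model = "gpt-4o-mini"
tableau_user = "user@example.com"  # username querying the target Tableau Data Source (placeholder)
datasource_luid = "0965e61b-a072-43cf-994c-8c6cf526940d"  # the Superstore sample data source

# The cookbook's tool factory call gains a model_provider keyword; the factory
# itself and its other connection arguments are not part of these hunks, so the
# keywords it receives are just collected here for illustration.
tool_kwargs = dict(
    tableau_user=tableau_user,
    datasource_luid=datasource_luid,
    tooling_llm_model=tooling_llm_model,
    model_provider=model_provider,  # the newly required keyword argument
)
```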

---------

Co-authored-by: Joe Constantino <joe.constantino@joecons-ltm6v86.internal.salesforce.com>
Author: Joey Constantino
Date: 2025-04-14 12:34:54 -07:00
Committed by: GitHub
Parent: f7c4965fb6
Commit: 2282762528


@@ -98,7 +98,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "310d21b3",
"metadata": {},
"outputs": [],
@@ -125,7 +125,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "ccfb4159-34ac-4816-a8f0-795c5442c0b2",
"metadata": {},
"outputs": [],
@@ -148,16 +148,16 @@
" \"TABLEAU_JWT_SECRET\"\n",
") # a JWT secret ID (obtained through Tableau's admin UI)\n",
"tableau_api_version = \"3.21\" # the current Tableau REST API Version\n",
"tableau_user = \"joe.constantino@salesforce.com\" # replace with the username querying the target Tableau Data Source\n",
"tableau_user = \"joe.constantino@salesforce.com\" # enter the username querying the target Tableau Data Source\n",
"\n",
"# For this cookbook we are connecting to the Superstore dataset that comes by default with every Tableau server\n",
"datasource_luid = (\n",
" \"0965e61b-a072-43cf-994c-8c6cf526940d\" # the target data source for this Tool\n",
")\n",
"\n",
"model_provider = \"openai\" # the name of the model provider you are using for your Agent\n",
"# Add variables to control LLM models for the Agent and Tools\n",
"os.environ[\"OPENAI_API_KEY\"] # set an your model API key as an environment variable\n",
"tooling_llm_model = \"gpt-4o\""
"tooling_llm_model = \"gpt-4o-mini\""
]
},
{
@@ -178,7 +178,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"id": "72ee3eca",
"metadata": {},
"outputs": [],
@@ -194,6 +194,7 @@
" tableau_user=tableau_user,\n",
" datasource_luid=datasource_luid,\n",
" tooling_llm_model=tooling_llm_model,\n",
" model_provider=model_provider,\n",
")\n",
"\n",
"# load the List of Tools to be used by the Agent. In this case we will just load our data source Q&A tool.\n",
@@ -211,47 +212,14 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"id": "06a1d3f7-79a8-452e-b37e-9070d15445b0",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"Here are the results for the states with the highest sales and profits based on the data queried:\n",
"\n",
"### States with the Most Sales\n",
"1. **California**: $457,687.63\n",
"2. **New York**: $310,876.27\n",
"3. **Texas**: $170,188.05\n",
"4. **Washington**: $138,641.27\n",
"5. **Pennsylvania**: $116,511.91\n",
"\n",
"### States with the Most Profit\n",
"1. **California**: $76,381.39\n",
"2. **New York**: $74,038.55\n",
"3. **Washington**: $33,402.65\n",
"4. **Michigan**: $24,463.19\n",
"5. **Virginia**: $18,597.95\n",
"\n",
"### Comparison\n",
"- **California** and **New York** are the only states that appear in both lists, indicating they are the top sellers and also generate the most profit.\n",
"- **Texas**, while having the third highest sales, does not rank in the top five for profit, showing a potential issue with profitability despite high sales.\n",
"\n",
"This analysis suggests that high sales do not always correlate with high profits, as seen with Texas."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"outputs": [],
"source": [
"from IPython.display import Markdown, display\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\n",
"model = ChatOpenAI(model=\"gpt-4o\", temperature=0)\n",
"\n",
"tableauAgent = create_react_agent(model, tools)\n",
"\n",
@@ -261,13 +229,13 @@
" \"messages\": [\n",
" (\n",
" \"human\",\n",
" \"which states sell the most? Are those the same states with the most profits?\",\n",
" \"what's going on with table sales?\",\n",
" )\n",
" ]\n",
" }\n",
")\n",
"messages\n",
"# display(Markdown(messages['messages'][4].content)) #display a nicely formatted answer for successful generations"
"# display(Markdown(messages['messages'][3].content)) #display a nicely formatted answer for successful generations"
]
},
{
@@ -293,9 +261,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python (package_test_env)",
"display_name": "Python 3",
"language": "python",
"name": "package_test_env"
"name": "python3"
},
"language_info": {
"codemirror_mode": {