diff --git a/docs/docs/integrations/callbacks/confident.ipynb b/docs/docs/integrations/callbacks/confident.ipynb index e1fdc34c955..f98d529dd62 100644 --- a/docs/docs/integrations/callbacks/confident.ipynb +++ b/docs/docs/integrations/callbacks/confident.ipynb @@ -7,10 +7,7 @@ "source": [ "# Confident\n", "\n", - ">[DeepEval](https://confident-ai.com) package for unit testing LLMs.\n", - "> Using Confident, everyone can build robust language models through faster iterations\n", - "> using both unit testing and integration testing. We provide support for each step in the iteration\n", - "> from synthetic data creation to testing.\n" + ">[DeepEval](https://confident-ai.com) package for unit testing LLMs." ] }, { @@ -42,7 +39,7 @@ "metadata": {}, "outputs": [], "source": [ - "%pip install --upgrade --quiet langchain langchain-openai langchain-community deepeval langchain-chroma" + "!pip install deepeval langchain langchain-openai" ] }, { @@ -64,11 +61,29 @@ }, { "cell_type": "code", - "execution_count": 11, + "execution_count": null, "metadata": {}, - "outputs": [], + "outputs": [ + { + "data": { + "text/html": [ + "
🙌🥳 Congratulations! You've successfully logged in! 🙌 \n", "\n" ], "text/plain": [ "🙌🥳 Congratulations! You've successfully logged in! 🙌 \n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ - "!deepeval login" + "import os\n", + "import deepeval\n", + "\n", + "api_key = os.getenv(\"DEEPEVAL_API_KEY\")\n", + "deepeval.login(api_key)" ] }, { @@ -76,12 +91,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Setup DeepEval\n", + "### Setup Confident AI Callback (Modern)\n", "\n", - "You can, by default, use the `DeepEvalCallbackHandler` to set up the metrics you want to track. However, this has limited support for metrics at the moment (more to be added soon). It currently supports:\n", - "- [Answer Relevancy](https://docs.confident-ai.com/docs/measuring_llm_performance/answer_relevancy)\n", - "- [Bias](https://docs.confident-ai.com/docs/measuring_llm_performance/debias)\n", - "- [Toxicness](https://docs.confident-ai.com/docs/measuring_llm_performance/non_toxic)" + "The previous DeepEvalCallbackHandler and metric tracking are deprecated. Please use the new integration below." ] }, { @@ -90,10 +102,15 @@ "metadata": {}, "outputs": [], "source": [ - "from deepeval.metrics.answer_relevancy import AnswerRelevancy\n", + "from deepeval.integrations.langchain import CallbackHandler\n", "\n", - "# Here we want to make sure the answer is minimally relevant\n", - "answer_relevancy_metric = AnswerRelevancy(minimum_score=0.5)" + "handler = CallbackHandler(\n", + " name=\"My Trace\",\n", + " tags=[\"production\", \"v1\"],\n", + " metadata={\"experiment\": \"A/B\"},\n", + " thread_id=\"thread-123\",\n", + " user_id=\"user-456\",\n", + ")" ] }, { @@ -103,186 +120,11 @@ "source": [ "## Get Started" ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To use the `DeepEvalCallbackHandler`, we need the `implementation_name`. 
" ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from langchain_community.callbacks.confident_callback import DeepEvalCallbackHandler\n", - "\n", - "deepeval_callback = DeepEvalCallbackHandler(\n", - " implementation_name=\"langchainQuickstart\", metrics=[answer_relevancy_metric]\n", - ")" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Scenario 1: Feeding into LLM\n", - "\n", - "You can then feed it into your LLM with OpenAI." - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "LLMResult(generations=[[Generation(text='\\n\\nQ: What did the fish say when he hit the wall? \\nA: Dam.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nThe Moon \\n\\nThe moon is high in the midnight sky,\\nSparkling like a star above.\\nThe night so peaceful, so serene,\\nFilling up the air with love.\\n\\nEver changing and renewing,\\nA never-ending light of grace.\\nThe moon remains a constant view,\\nA reminder of life's gentle pace.\\n\\nThrough time and space it guides us on,\\nA never-fading beacon of hope.\\nThe moon shines down on us all,\\nAs it continues to rise and elope.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nQ. What did one magnet say to the other magnet?\\nA. \"I find you very attractive!\"', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=\"\\n\\nThe world is charged with the grandeur of God.\\nIt will flame out, like shining from shook foil;\\nIt gathers to a greatness, like the ooze of oil\\nCrushed. 
Why do men then now not reck his rod?\\n\\nGenerations have trod, have trod, have trod;\\nAnd all is seared with trade; bleared, smeared with toil;\\nAnd wears man's smudge and shares man's smell: the soil\\nIs bare now, nor can foot feel, being shod.\\n\\nAnd for all this, nature is never spent;\\nThere lives the dearest freshness deep down things;\\nAnd though the last lights off the black West went\\nOh, morning, at the brown brink eastward, springs —\\n\\nBecause the Holy Ghost over the bent\\nWorld broods with warm breast and with ah! bright wings.\\n\\n~Gerard Manley Hopkins\", generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nQ: What did one ocean say to the other ocean?\\nA: Nothing, they just waved.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text=\"\\n\\nA poem for you\\n\\nOn a field of green\\n\\nThe sky so blue\\n\\nA gentle breeze, the sun above\\n\\nA beautiful world, for us to love\\n\\nLife is a journey, full of surprise\\n\\nFull of joy and full of surprise\\n\\nBe brave and take small steps\\n\\nThe future will be revealed with depth\\n\\nIn the morning, when dawn arrives\\n\\nA fresh start, no reason to hide\\n\\nSomewhere down the road, there's a heart that beats\\n\\nBelieve in yourself, you'll always succeed.\", generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'completion_tokens': 504, 'total_tokens': 528, 'prompt_tokens': 24}, 'model_name': 'text-davinci-003'})" - ] - }, - "execution_count": 7, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from langchain_openai import OpenAI\n", - "\n", - "llm = OpenAI(\n", - " temperature=0,\n", - " callbacks=[deepeval_callback],\n", - " verbose=True,\n", - " openai_api_key=\"