Implement async support for Cohere (#8237)

This PR introduces async API support for Cohere, covering both the LLM and
embeddings integrations. It requires updating the `cohere` package to `^4`.
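
Not part of the diff, but a minimal usage sketch of what this change enables, assuming the updated wrappers follow LangChain's standard async interfaces (`agenerate`, `aembed_documents`) and that `COHERE_API_KEY` is set in the environment:

```python
import asyncio

from langchain.embeddings import CohereEmbeddings
from langchain.llms import Cohere


async def main() -> None:
    # Async LLM call: agenerate takes a list of prompts and returns an LLMResult.
    llm = Cohere()
    result = await llm.agenerate(["Tell me a joke.", "Name one fact about whales."])
    for generations in result.generations:
        print(generations[0].text)

    # Async embeddings: aembed_documents mirrors the synchronous embed_documents.
    embeddings = CohereEmbeddings()
    vectors = await embeddings.aembed_documents(["hello world", "goodbye world"])
    print(len(vectors), len(vectors[0]))


asyncio.run(main())
```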

Tagging @hwchase17, @baskaryan, @agola11

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Author: Kacper Łukawski
Date: 2023-07-27 00:51:18 +02:00
Committed by: GitHub
Parent: bf1357f584
Commit: c5988c1d4b
5 changed files with 159 additions and 30 deletions


@@ -9,7 +9,7 @@
"\n",
"LangChain provides async support for LLMs by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n",
"\n",
"Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, `OpenAI`, `PromptLayerOpenAI`, `ChatOpenAI` and `Anthropic` are supported, but async support for other LLMs is on the roadmap.\n",
"Async support is particularly useful for calling multiple LLMs concurrently, as these calls are network-bound. Currently, `OpenAI`, `PromptLayerOpenAI`, `ChatOpenAI`, `Anthropic` and `Cohere` are supported, but async support for other LLMs is on the roadmap.\n",
"\n",
"You can use the `agenerate` method to call an OpenAI LLM asynchronously."
]
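
(The notebook cell touched above times a batch of concurrent `agenerate` calls against the same calls made serially. A rough sketch of the concurrent pattern, adapted to the new Cohere wrapper; the helper names below are illustrative, not taken from the notebook:)

```python
import asyncio
import time

from langchain.llms import Cohere


async def async_generate(llm: Cohere) -> None:
    # Each call awaits a single completion from the Cohere API.
    resp = await llm.agenerate(["Hello, how are you?"])
    print(resp.generations[0][0].text)


async def generate_concurrently() -> None:
    llm = Cohere(temperature=0.9)
    # Launch ten requests at once; the network waits overlap.
    await asyncio.gather(*(async_generate(llm) for _ in range(10)))


start = time.perf_counter()
asyncio.run(generate_concurrently())
print(f"Concurrent executed in {time.perf_counter() - start:.2f} seconds.")
```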
@@ -56,7 +56,7 @@
"\n",
"\n",
"I'm doing well, thank you. How about you?\n",
"\u001b[1mConcurrent executed in 1.39 seconds.\u001b[0m\n",
"\u001B[1mConcurrent executed in 1.39 seconds.\u001B[0m\n",
"\n",
"\n",
"I'm doing well, thank you. How about you?\n",
@@ -86,7 +86,7 @@
"\n",
"\n",
"I'm doing well, thanks for asking. How about you?\n",
"\u001b[1mSerial executed in 5.77 seconds.\u001b[0m\n"
"\u001B[1mSerial executed in 5.77 seconds.\u001B[0m\n"
]
}
],