diff --git a/docs/docs/_templates/integration.mdx b/docs/docs/_templates/integration.mdx
index 234c8cc09ee..5e686ad3fc1 100644
--- a/docs/docs/_templates/integration.mdx
+++ b/docs/docs/_templates/integration.mdx
@@ -37,7 +37,7 @@ from langchain_community.llms import integration_class_REPLACE_ME
 
 ## Text Embedding Models
 
-See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME)
+See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME).
 
 ```python
 from langchain_community.embeddings import integration_class_REPLACE_ME
@@ -45,7 +45,7 @@ from langchain_community.embeddings import integration_class_REPLACE_ME
 
 ## Chat models
 
-See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME)
+See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME).
 
 ```python
 from langchain_community.chat_models import integration_class_REPLACE_ME
diff --git a/docs/docs/guides/deployments/index.mdx b/docs/docs/guides/deployments/index.mdx
index c075c3b92ee..cdebe6c311c 100644
--- a/docs/docs/guides/deployments/index.mdx
+++ b/docs/docs/guides/deployments/index.mdx
@@ -98,7 +98,7 @@ The LLM landscape is evolving at an unprecedented pace, with new libraries and m
 
 ### Model composition
 
-Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural language input SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the SQL query on an engine, collect and feedback the response to the LLM as the query runs, and present the results to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together.
+Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural language input SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the SQL query on an engine, collect and feed back the response to the LLM as the query runs, and present the results to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together.
 
 ## Cloud providers