Mirror of https://github.com/hwchase17/langchain.git, synced 2025-09-15 06:26:12 +00:00
community[minor]: Unify Titan Takeoff Integrations and Adding Embedding Support (#18775)
**Community: Unify Titan Takeoff Integrations and Adding Embedding Support**

**Description:** Titan Takeoff is no longer reflected by either of the integrations in the community folder. The two integrations (TitanTakeoffPro and TitanTakeoff) were causing confusion for clients, so the code has been moved into one place and an alias added for backwards compatibility. The Takeoff Client Python package now does the bulk of the request handling; because that package is actively updated with new versions of Takeoff, this integration will be far more robust and will not degrade as badly over time.

**Issue:** Fixes bugs in the old Titan integrations and unifies the code, with unit test coverage added to avoid future problems.

**Dependencies:** Added the optional dependency takeoff-client. All imports, including the Titan Takeoff classes, still work without the dependency, but initialisation will fail if takeoff-client has not been pip installed.

**Twitter** @MeryemArik9

Thanks all :)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
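For reviewers, here is a minimal sketch of the unified surface after this change (a sketch, not part of the diff: it assumes a Takeoff server is already running on its default port with an embedding reader in the "embed" consumer group; see the updated notebooks below for the canonical examples):

```python
# Optional dependency: the integration now delegates request handling to the
# takeoff-client package. Imports succeed without it, but initialising the
# classes below fails until it is installed:
#   pip install takeoff-client

# TitanTakeoffPro is kept as a backwards-compatible alias of TitanTakeoff;
# both names resolve to the same object under the hood.
from langchain_community.llms import TitanTakeoff, TitanTakeoffPro
from langchain_community.embeddings import TitanTakeoffEmbed

llm = TitanTakeoff()  # equivalent to TitanTakeoffPro()
print(llm.invoke("What is the capital of France?"))

# New embedding support added by this PR.
embed = TitanTakeoffEmbed()
print(embed.embed_query("What is the capital of France?", consumer_group="embed"))
```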
@@ -1,75 +1,15 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Titan Takeoff\n",
"\n",
">`TitanML` helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform. \n",
"`TitanML` helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.\n",
"\n",
">Our inference server, [Titan Takeoff](https://docs.titanml.co/docs/titan-takeoff/getting-started) enables deployment of LLMs locally on your hardware in a single command. Most generative model architectures are supported, such as Falcon, Llama 2, GPT2, T5 and many more."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"To get started with Iris Takeoff, all you need is to have docker and python installed on your local system. If you wish to use the server with gpu support, then you will need to install docker with cuda support.\n",
"\n",
"For Mac and Windows users, make sure you have the docker daemon running! You can check this by running docker ps in your terminal. To start the daemon, open the docker desktop app.\n",
"\n",
"Run the following command to install the Iris CLI that will enable you to run the takeoff server:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet titan-iris"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Choose a Model\n",
"Takeoff supports many of the most powerful generative text models, such as Falcon, MPT, and Llama. See the [supported models](https://docs.titanml.co/docs/titan-takeoff/supported-models) for more information. For information about using your own models, see the [custom models](https://docs.titanml.co/docs/titan-takeoff/Advanced/custom-models).\n",
"\n",
"Going forward in this demo we will be using the falcon 7B instruct model. This is a good open-source model that is trained to follow instructions, and is small enough to easily inference even on CPUs.\n",
"\n",
"## Taking off\n",
"Models are referred to by their model id on HuggingFace. Takeoff uses port 8000 by default, but can be configured to use another port. There is also support to use a Nvidia GPU by specifying cuda for the device flag.\n",
"\n",
"To start the takeoff server, run:\n",
"\n",
"```shell\n",
"iris takeoff --model tiiuae/falcon-7b-instruct --device cpu\n",
"iris takeoff --model tiiuae/falcon-7b-instruct --device cuda # Nvidia GPU required\n",
"iris takeoff --model tiiuae/falcon-7b-instruct --device cpu --port 5000 # run on port 5000 (default: 8000)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You will then be directed to a login page, where you will need to create an account to proceed.\n",
"After logging in, run the command onscreen to check whether the server is ready. When it is ready, you can start using the Takeoff integration.\n",
"\n",
"To shutdown the server, run the following command. You will be presented with options on which Takeoff server to shut down, in case you have multiple running servers.\n",
"\n",
"```shell\n",
"iris takeoff --shutdown # shutdown the server\n",
"```"
"Our inference server, [Titan Takeoff](https://docs.titanml.co/docs/intro) enables deployment of LLMs locally on your hardware in a single command. Most generative model architectures are supported, such as Falcon, Llama 2, GPT2, T5 and many more. If you experience trouble with a specific model, please let us know at hello@titanml.co."
]
},
{
@@ -77,8 +17,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Inferencing your model\n",
"To access your LLM, use the TitanTakeoff LLM wrapper:"
"## Example usage\n",
"Here are some helpful examples to get started using Titan Takeoff Server. You need to make sure Takeoff Server has been started in the background before running these commands. For more information see [docs page for launching Takeoff](https://docs.titanml.co/docs/Docs/launching/)."
]
},
{
@@ -87,50 +27,23 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import TitanTakeoff\n",
"import time\n",
"\n",
"llm = TitanTakeoff(\n",
" base_url=\"http://localhost:8000\", generate_max_length=128, temperature=1.0\n",
")\n",
"\n",
"prompt = \"What is the largest planet in the solar system?\"\n",
"\n",
"llm(prompt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"No parameters are needed by default, but a baseURL that points to your desired URL where Takeoff is running can be specified and [generation parameters](https://docs.titanml.co/docs/titan-takeoff/Advanced/generation-parameters) can be supplied.\n",
"\n",
"### Streaming\n",
"Streaming is also supported via the streaming flag:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain.prompts import PromptTemplate\n",
"\n",
"llm = TitanTakeoff(\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), streaming=True\n",
")\n",
"\n",
"prompt = \"What is the capital of France?\"\n",
"\n",
"llm(prompt)"
"# Note importing TitanTakeoffPro instead of TitanTakeoff will work as well both use same object under the hood\n",
"from langchain_community.llms import TitanTakeoff"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Integration with LLMChain"
"### Example 1\n",
"\n",
"Basic use assuming Takeoff is running on your machine using its default ports (ie localhost:3000).\n"
]
},
{
@@ -139,19 +52,144 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"llm = TitanTakeoff()\n",
"output = llm.invoke(\"What is the weather in London in August?\")\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example 2\n",
"\n",
"template = \"What is the capital of {country}\"\n",
"Specifying a port and other generation parameters"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = TitanTakeoff(port=3000)\n",
"# A comprehensive list of parameters can be found at https://docs.titanml.co/docs/next/apis/Takeoff%20inference_REST_API/generate#request\n",
"output = llm.invoke(\n",
" \"What is the largest rainforest in the world?\",\n",
" consumer_group=\"primary\",\n",
" min_new_tokens=128,\n",
" max_new_tokens=512,\n",
" no_repeat_ngram_size=2,\n",
" sampling_topk=1,\n",
" sampling_topp=1.0,\n",
" sampling_temperature=1.0,\n",
" repetition_penalty=1.0,\n",
" regex_string=\"\",\n",
" json_schema=None,\n",
")\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example 3\n",
"\n",
"prompt = PromptTemplate.from_template(template)\n",
"Using generate for multiple inputs"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = TitanTakeoff()\n",
"rich_output = llm.generate([\"What is Deep Learning?\", \"What is Machine Learning?\"])\n",
"print(rich_output.generations)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example 4\n",
"\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
"Streaming output"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = TitanTakeoff(\n",
" streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])\n",
")\n",
"prompt = \"What is the capital of France?\"\n",
"output = llm.invoke(prompt)\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example 5\n",
"\n",
"generated = llm_chain.run(country=\"Belgium\")\n",
"print(generated)"
"Using LCEL"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = TitanTakeoff()\n",
"prompt = PromptTemplate.from_template(\"Tell me about {topic}\")\n",
"chain = prompt | llm\n",
"output = chain.invoke({\"topic\": \"the universe\"})\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example 6\n",
"\n",
"Starting readers using TitanTakeoff Python Wrapper. If you haven't created any readers when first launching Takeoff, or you want to add another, you can do so when you initialize the TitanTakeoff object. Just pass a list of model configs you want to start as the `models` parameter."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Model config for the llama model, where you can specify the following parameters:\n",
"# model_name (str): The name of the model to use\n",
"# device (str): The device to use for inference, cuda or cpu\n",
"# consumer_group (str): The consumer group to place the reader into\n",
"# tensor_parallel (Optional[int]): The number of gpus you would like your model to be split across\n",
"# max_seq_length (int): The maximum sequence length to use for inference, defaults to 512\n",
"# max_batch_size (int): The max batch size for continuous batching of requests\n",
"llama_model = {\n",
" \"model_name\": \"TheBloke/Llama-2-7b-Chat-AWQ\",\n",
" \"device\": \"cuda\",\n",
" \"consumer_group\": \"llama\",\n",
"}\n",
"llm = TitanTakeoff(models=[llama_model])\n",
"\n",
"# The model needs time to spin up, the length of time needed will depend on the size of the model and your network connection speed\n",
"time.sleep(60)\n",
"\n",
"prompt = \"What is the capital of France?\"\n",
"output = llm.invoke(prompt, consumer_group=\"llama\")\n",
"print(output)"
]
}
],
@@ -1,102 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Titan Takeoff Pro\n",
"\n",
"`TitanML` helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.\n",
"\n",
">Note: These docs are for the Pro version of Titan Takeoff. For the community version, see the page for Titan Takeoff.\n",
"\n",
"Our inference server, [Titan Takeoff (Pro Version)](https://docs.titanml.co/docs/titan-takeoff/pro-features/feature-comparison) enables deployment of LLMs locally on your hardware in a single command. Most generative model architectures are supported, such as Falcon, Llama 2, GPT2, T5 and many more."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example usage\n",
"Here are some helpful examples to get started using the Pro version of Titan Takeoff Server.\n",
"No parameters are needed by default, but a baseURL that points to your desired URL where Takeoff is running can be specified and generation parameters can be supplied."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain_community.llms import TitanTakeoffPro\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"# Example 1: Basic use\n",
"llm = TitanTakeoffPro()\n",
"output = llm(\"What is the weather in London in August?\")\n",
"print(output)\n",
"\n",
"\n",
"# Example 2: Specifying a port and other generation parameters\n",
"llm = TitanTakeoffPro(\n",
" base_url=\"http://localhost:3000\",\n",
" min_new_tokens=128,\n",
" max_new_tokens=512,\n",
" no_repeat_ngram_size=2,\n",
" sampling_topk=1,\n",
" sampling_topp=1.0,\n",
" sampling_temperature=1.0,\n",
" repetition_penalty=1.0,\n",
" regex_string=\"\",\n",
")\n",
"output = llm(\"What is the largest rainforest in the world?\")\n",
"print(output)\n",
"\n",
"\n",
"# Example 3: Using generate for multiple inputs\n",
"llm = TitanTakeoffPro()\n",
"rich_output = llm.generate([\"What is Deep Learning?\", \"What is Machine Learning?\"])\n",
"print(rich_output.generations)\n",
"\n",
"\n",
"# Example 4: Streaming output\n",
"llm = TitanTakeoffPro(\n",
" streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])\n",
")\n",
"prompt = \"What is the capital of France?\"\n",
"llm(prompt)\n",
"\n",
"# Example 5: Using LCEL\n",
"llm = TitanTakeoffPro()\n",
"prompt = PromptTemplate.from_template(\"Tell me about {topic}\")\n",
"chain = prompt | llm\n",
"chain.invoke({\"topic\": \"the universe\"})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
docs/docs/integrations/text_embedding/titan_takeoff.ipynb (new file, 112 lines)
@@ -0,0 +1,112 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Titan Takeoff\n",
"\n",
"`TitanML` helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.\n",
"\n",
"Our inference server, [Titan Takeoff](https://docs.titanml.co/docs/intro) enables deployment of LLMs locally on your hardware in a single command. Most embedding models are supported out of the box, if you experience trouble with a specific model, please let us know at hello@titanml.co."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example usage\n",
"Here are some helpful examples to get started using Titan Takeoff Server. You need to make sure Takeoff Server has been started in the background before running these commands. For more information see [docs page for launching Takeoff](https://docs.titanml.co/docs/Docs/launching/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"from langchain_community.embeddings import TitanTakeoffEmbed"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example 1\n",
"Basic use assuming Takeoff is running on your machine using its default ports (ie localhost:3000)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"embed = TitanTakeoffEmbed()\n",
"output = embed.embed_query(\n",
" \"What is the weather in London in August?\", consumer_group=\"embed\"\n",
")\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Example 2 \n",
|
||||
"Starting readers using TitanTakeoffEmbed Python Wrapper. If you haven't created any readers with first launching Takeoff, or you want to add another you can do so when you initialize the TitanTakeoffEmbed object. Just pass a list of models you want to start as the `models` parameter.\n",
|
||||
"\n",
|
||||
"You can use `embed.query_documents` to embed multiple documents at once. The expected input is a list of strings, rather than just a string expected for the `embed_query` method."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Model config for the embedding model, where you can specify the following parameters:\n",
|
||||
"# model_name (str): The name of the model to use\n",
|
||||
"# device: (str): The device to use for inference, cuda or cpu\n",
|
||||
"# consumer_group (str): The consumer group to place the reader into\n",
|
||||
"embedding_model = {\n",
|
||||
" \"model_name\": \"BAAI/bge-large-en-v1.5\",\n",
|
||||
" \"device\": \"cpu\",\n",
|
||||
" \"consumer_group\": \"embed\",\n",
|
||||
"}\n",
|
||||
"embed = TitanTakeoffEmbed(models=[embedding_model])\n",
|
||||
"\n",
|
||||
"# The model needs time to spin up, length of time need will depend on the size of model and your network connection speed\n",
|
||||
"time.sleep(60)\n",
|
||||
"\n",
|
||||
"prompt = \"What is the capital of France?\"\n",
|
||||
"# We specified \"embed\" consumer group so need to send request to the same consumer group so it hits our embedding model and not others\n",
|
||||
"output = embed.embed_query(prompt, consumer_group=\"embed\")\n",
|
||||
"print(output)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "langchain",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.12"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
@@ -1,6 +1,10 @@
{
"trailingSlash": true,
"redirects": [
{
"source": "/docs/integrations/llms/titan_takeoff_pro",
"destination": "/docs/integrations/llms/titan_takeoff"
},
{
"source": "/docs/integrations/providers/optimum_intel(/?)",
"destination": "/docs/integrations/providers/intel/"