{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Bittensor\n", "\n", ">[Bittensor](https://bittensor.com/) is a mining network, similar to Bitcoin, that includes built-in incentives designed to encourage miners to contribute compute + knowledge.\n", ">\n", ">`NIBittensorLLM` is developed by [Neural Internet](https://neuralinternet.ai/), powered by `Bittensor`.\n", "\n", ">This LLM showcases true potential of decentralized AI by giving you the best response(s) from the `Bittensor protocol`, which consist of various AI models such as `OpenAI`, `LLaMA2` etc.\n", "\n", "Users can view their logs, requests, and API keys on the [Validator Endpoint Frontend](https://api.neuralinternet.ai/). However, changes to the configuration are currently prohibited; otherwise, the user's queries will be blocked.\n", "\n", "If you encounter any difficulties or have any questions, please feel free to reach out to our developer on [GitHub](https://github.com/Kunj-2206), [Discord](https://discordapp.com/users/683542109248159777) or join our discord server for latest update and queries [Neural Internet](https://discord.gg/neuralinternet).\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Different Parameter and response handling for NIBittensorLLM " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "from pprint import pprint\n", "\n", "from langchain.globals import set_debug\n", "from langchain_community.llms import NIBittensorLLM\n", "\n", "set_debug(True)\n", "\n", "# System parameter in NIBittensorLLM is optional but you can set whatever you want to perform with model\n", "llm_sys = NIBittensorLLM(\n", " system_prompt=\"Your task is to determine response based on user prompt.Explain me like I am technical lead of a project\"\n", ")\n", "sys_resp = llm_sys(\n", " \"What is bittensor and What are the potential benefits of decentralized AI?\"\n", ")\n", "print(f\"Response provided by LLM with system prompt set is : {sys_resp}\")\n", "\n", "# The top_responses parameter can give multiple responses based on its parameter value\n", "# This below code retrive top 10 miner's response all the response are in format of json\n", "\n", "# Json response structure is\n", "\"\"\" {\n", " \"choices\": [\n", " {\"index\": Bittensor's Metagraph index number,\n", " \"uid\": Unique Identifier of a miner,\n", " \"responder_hotkey\": Hotkey of a miner,\n", " \"message\":{\"role\":\"assistant\",\"content\": Contains actual response},\n", " \"response_ms\": Time in millisecond required to fetch response from a miner} \n", " ]\n", " } \"\"\"\n", "\n", "multi_response_llm = NIBittensorLLM(top_responses=10)\n", "multi_resp = multi_response_llm(\"What is Neural Network Feeding Mechanism?\")\n", "json_multi_resp = json.loads(multi_resp)\n", "pprint(json_multi_resp)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using NIBittensorLLM with LLMChain and PromptTemplate" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from langchain.chains import LLMChain\n", "from langchain.globals import set_debug\n", "from langchain_community.llms import NIBittensorLLM\n", "from langchain_core.prompts import PromptTemplate\n", "\n", "set_debug(True)\n", "\n", "template = \"\"\"Question: {question}\n", "\n", "Answer: Let's think step by step.\"\"\"\n", "\n", "\n", "prompt = PromptTemplate.from_template(template)\n", "\n", "# System parameter in NIBittensorLLM is optional but you can set 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Using NIBittensorLLM with LLMChain and PromptTemplate" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from langchain.chains import LLMChain\n", "from langchain.globals import set_debug\n", "from langchain_community.llms import NIBittensorLLM\n", "from langchain_core.prompts import PromptTemplate\n", "\n", "set_debug(True)\n", "\n", "template = \"\"\"Question: {question}\n", "\n", "Answer: Let's think step by step.\"\"\"\n", "\n", "prompt = PromptTemplate.from_template(template)\n", "\n", "# The system_prompt parameter in NIBittensorLLM is optional; set it to steer how the model should respond\n", "llm = NIBittensorLLM(\n", "    system_prompt=\"Your task is to determine a response based on the user prompt.\"\n", ")\n", "\n", "llm_chain = LLMChain(prompt=prompt, llm=llm)\n", "question = \"What is bittensor?\"\n", "\n", "llm_chain.run(question)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Using NIBittensorLLM with a Conversational Agent and the Google Search Tool" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from langchain.tools import Tool\n", "from langchain_community.utilities import GoogleSearchAPIWrapper\n", "\n", "search = GoogleSearchAPIWrapper()\n", "\n", "tool = Tool(\n", "    name=\"Google Search\",\n", "    description=\"Search Google for recent results.\",\n", "    func=search.run,\n", ")" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from langchain import hub\n", "from langchain.agents import (\n", "    AgentExecutor,\n", "    create_react_agent,\n", ")\n", "from langchain.memory import ConversationBufferMemory\n", "from langchain_community.llms import NIBittensorLLM\n", "\n", "tools = [tool]\n", "\n", "prompt = hub.pull(\"hwchase17/react\")\n", "\n", "llm = NIBittensorLLM(\n", "    system_prompt=\"Your task is to determine a response based on the user prompt.\"\n", ")\n", "\n", "memory = ConversationBufferMemory(memory_key=\"chat_history\")\n", "\n", "agent = create_react_agent(llm, tools, prompt)\n", "agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory)\n", "\n", "# Pass the user's question as the agent input (not the prompt template itself)\n", "response = agent_executor.invoke({\"input\": \"What is bittensor?\"})" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 4 }