From 03a9af7668f3b5194e118356935d0926b04e5176 Mon Sep 17 00:00:00 2001 From: mike Date: Wed, 1 Jan 2025 16:17:03 +0800 Subject: [PATCH] Initial Commit On JP translation --- gpt4all-chat/translations/gpt4all_jp_JP.ts | 2860 ++++++++++++++++++++ 1 file changed, 2860 insertions(+) create mode 100644 gpt4all-chat/translations/gpt4all_jp_JP.ts diff --git a/gpt4all-chat/translations/gpt4all_jp_JP.ts b/gpt4all-chat/translations/gpt4all_jp_JP.ts new file mode 100644 index 00000000..4c240ea1 --- /dev/null +++ b/gpt4all-chat/translations/gpt4all_jp_JP.ts @@ -0,0 +1,2860 @@ + + + + + AddCollectionView + + + ← Existing Collections + 既存のコレクション + + + + Add Document Collection + ドキュメントコレクションを追加 + + + + Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings. + プレーンテキスト ファイル、PDF、または Markdown を含むフォルダーを追加します。追加のファイル拡張子は [設定] で設定できます。 + + + + Name + 名前 + + + + Collection name... + コレクション名... + + + + Name of the collection to add (Required) + 追加するコレクションの名前(必須) + + + + Folder + フォルダー + + + + Folder path... + フォルダー パス... + + + + Folder path to documents (Required) + ドキュメントのフォルダー パス (必須) + + + + Browse + 参照 + + + + Create Collection + コレクションの作成 + + + + AddGPT4AllModelView + + + These models have been specifically configured for use in GPT4All. The first few models on the list are known to work the best, but you should only attempt to use models that will fit in your available memory.
+ これらのモデルは、GPT4All で使用するために特別に構成されています。リストの最初のいくつかのモデルが最もよく機能することがわかっていますが、使用可能なメモリに収まるモデルのみを使用するようにしてください。 + + + + Network error: could not retrieve %1 + ネットワーク エラー: %1 を取得できませんでした + + + + + Busy indicator + ビジーインジケーター + + + + Displayed when the models request is ongoing + モデルリクエストが進行中の場合に表示されます + + + + Model file + モデルファイル + + + + Model file to be downloaded + ダウンロードするモデルファイル + + + + Description + 説明 + + + + File description + ファイルの説明 + + + + Cancel + キャンセル + + + + Resume + 再開 + + + + Download + ダウンロード + + + + Stop/restart/start the download + ダウンロードを停止/再開/開始する + + + + Remove + 削除 + + + + Remove model from filesystem + ファイルシステムからモデルを削除する + + + + + Install + インストール + + + + Install online model + オンラインモデルをインストールする + + + + <strong><font size="1"><a href="#error">Error</a></strong></font> + <strong><font size="1"><a href="#error">エラー</a></strong></font> + + + + Describes an error that occurred when downloading + ダウンロード中に発生したエラーについて説明します + + + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> + <strong><font size="2">警告: お使いのハードウェアには推奨されません。モデルには、システムで使用可能なメモリ (%2) よりも多くのメモリ (%1 GB) が必要です。</strong></font> + + + + Error for incompatible hardware + 互換性のないハードウェアのエラー + + + + Download progressBar + ダウンロードの進行状況バー + + + + Shows the progress made in the download + ダウンロードの進行状況を表示します + + + + Download speed + ダウンロード速度 + + + + Download speed in bytes/kilobytes/megabytes per second + ダウンロード速度 (バイト/キロバイト/メガバイト/秒) + + + + Calculating... + 計算中... + + + + + + + Whether the file hash is being calculated + + + + + Displayed when the file hash is being calculated + + + + + ERROR: $API_KEY is empty. + + + + + enter $API_KEY + + + + + ERROR: $BASE_URL is empty. + + + + + enter $BASE_URL + + + + + ERROR: $MODEL_NAME is empty. + + + + + enter $MODEL_NAME + + + + + File size + + + + + RAM required + 必要なRAM + + + + %1 GB + + + + + + ? + ?
+ + + + Parameters + + + + + Quant + + + + + Type + タイプ + + + + AddHFModelView + + + Use the search to find and download models from HuggingFace. There is NO GUARANTEE that these will work. Many will require additional configuration before they can be used. + + + + + Discover and download models by keyword search... + + + + + Text field for discovering and filtering downloadable models + + + + + Searching · %1 + + + + + Initiate model discovery and filtering + + + + + Triggers discovery and filtering of models + + + + + Default + + + + + Likes + + + + + Downloads + + + + + Recent + + + + + Sort by: %1 + + + + + Asc + + + + + Desc + + + + + Sort dir: %1 + + + + + None + + + + + Limit: %1 + + + + + Model file + + + + + Model file to be downloaded + + + + + Description + + + + + File description + + + + + Cancel + + + + + Resume + + + + + Download + + + + + Stop/restart/start the download + + + + + Remove + + + + + Remove model from filesystem + + + + + + Install + + + + + Install online model + + + + + <strong><font size="1"><a href="#error">Error</a></strong></font> + + + + + Describes an error that occurred when downloading + + + + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> + + + + + Error for incompatible hardware + + + + + Download progressBar + + + + + Shows the progress made in the download + + + + + Download speed + + + + + Download speed in bytes/kilobytes/megabytes per second + + + + + Calculating... + + + + + + + + Whether the file hash is being calculated + + + + + Busy indicator + + + + + Displayed when the file hash is being calculated + + + + + ERROR: $API_KEY is empty. + + + + + enter $API_KEY + + + + + ERROR: $BASE_URL is empty. + + + + + enter $BASE_URL + + + + + ERROR: $MODEL_NAME is empty. 
+ + + + + enter $MODEL_NAME + + + + + File size + + + + + Quant + + + + + Type + + + + + AddModelView + + + ← Existing Models + + + + + Explore Models + + + + + GPT4All + + + + + HuggingFace + + + + + ApplicationSettings + + + Application + + + + + Network dialog + + + + + opt-in to share feedback/conversations + + + + + Error dialog + + + + + Application Settings + + + + + General + + + + + Theme + + + + + The application color scheme. + + + + + Dark + + + + + Light + + + + + ERROR: Update system could not find the MaintenanceTool used to check for updates!<br/><br/>Did you install this application using the online installer? If so, the MaintenanceTool executable should be located one directory above where this application resides on your filesystem.<br/><br/>If you can't start it manually, then I'm afraid you'll have to reinstall. + + + + + LegacyDark + + + + + Font Size + + + + + The size of text in the application. + + + + + Small + + + + + Medium + + + + + Large + + + + + Language and Locale + + + + + The language and locale you wish to use. + + + + + System Locale + + + + + Device + + + + + The compute device used for text generation. + + + + + + Application default + + + + + Default Model + + + + + The preferred model for new chats. Also used as the local server fallback. + + + + + Suggestion Mode + + + + + Generate suggested follow-up questions at the end of responses. + + + + + When chatting with LocalDocs + + + + + Whenever possible + + + + + Never + + + + + Download Path + + + + + Where to store local models and the LocalDocs database. + + + + + Browse + + + + + Choose where to save model files + + + + + Enable Datalake + + + + + Send chats and feedback to the GPT4All Open-Source Datalake. + + + + + Advanced + + + + + CPU Threads + + + + + The number of CPU threads used for inference and embedding. + + + + + Enable System Tray + + + + + The application will minimize to the system tray when the window is closed. 
+ + + + + Enable Local API Server + + + + + Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage. + + + + + API Server Port + + + + + The port to use for the local server. Requires restart. + + + + + Check For Updates + + + + + Manually check for an update to GPT4All. + + + + + Updates + + + + + Chat + + + + New Chat + + + + + Server Chat + + + + + ChatAPIWorker + + + ERROR: Network error occurred while connecting to the API server + + + + + ChatAPIWorker::handleFinished got HTTP Error %1 %2 + + + + + ChatDrawer + + + Drawer + + + + + Main navigation drawer + + + + + + New Chat + + + + + Create a new chat + + + + + Select the current chat or edit the chat when in edit mode + + + + + Edit chat name + + + + + Save chat name + + + + + Delete chat + + + + + Confirm chat deletion + + + + + Cancel chat deletion + + + + + List of chats + + + + + List of chats in the drawer dialog + + + + + ChatItemView + + + GPT4All + + + + + You + + + + + response stopped ... + + + + + retrieving localdocs: %1 ... + + + + + searching localdocs: %1 ... + + + + + processing ... + + + + + generating response ... + + + + + generating questions ... + + + + + + Copy + + + + + Copy Message + + + + + Disable markdown + + + + + Enable markdown + + + + + %n Source(s) + + %n Source + + + + + LocalDocs + + + + + Edit this message? + + + + + + All following messages will be permanently erased. + + + + + Redo this response? + + + + + Cannot edit chat without a loaded model. + + + + + Cannot edit chat while the model is generating. + + + + + Edit + + + + + Cannot redo response without a loaded model. + + + + + Cannot redo response while the model is generating. + + + + + Redo + + + + + Like response + + + + + Dislike response + + + + + Suggested follow-ups + + + + + ChatLLM + + + Your message was too long and could not be processed (%1 > %2). Please try again with something shorter. 
+ + + + + ChatListModel + + + TODAY + + + + + THIS WEEK + + + + + THIS MONTH + + + + + LAST SIX MONTHS + + + + + THIS YEAR + + + + + LAST YEAR + + + + + ChatView + + + <h3>Warning</h3><p>%1</p> + + + + + Conversation copied to clipboard. + + + + + Code copied to clipboard. + + + + + The entire chat will be erased. + + + + + Chat panel + + + + + Chat panel with options + + + + + Reload the currently loaded model + + + + + Eject the currently loaded model + + + + + No model installed. + + + + + Model loading error. + + + + + Waiting for model... + + + + + Switching context... + + + + + Choose a model... + + + + + Not found: %1 + + + + + The top item is the current model + + + + + LocalDocs + + + + + Add documents + + + + + add collections of documents to the chat + + + + + Load the default model + + + + + Loads the default model which can be changed in settings + + + + + No Model Installed + + + + + GPT4All requires that you install at least one +model to get started + + + + + Install a Model + + + + + Shows the add model view + + + + + Conversation with the model + + + + + prompt / response pairs from the conversation + + + + + Legacy prompt template needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. + + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. + + + + + Legacy system prompt needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. 
+ + + + + Copy + + + + %n Source(s) + + %n Source + + + + + Erase and reset chat session + + + + + Copy chat session to clipboard + + + + + Add media + + + + + Adds media to the prompt + + + + + Stop generating + + + + + Stop the current response generation + + + + + Attach + + + + + Single File + + + + + Reloads the model + + + + + <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help + + + + + + Erase conversation? + + + + + Changing the model will erase the current conversation. + + + + + + Reload · %1 + + + + + Loading · %1 + + + + + Load · %1 (default) → + + + + + Send a message... + + + + + Load a model to continue... + + + + + Send messages/prompts to the model + + + + + Cut + + + + + Paste + + + + + Select All + + + + + Send message + + + + + Sends the message/prompt contained in textfield to the model + + + + + CollectionsDrawer + + + Warning: searching collections while indexing can return incomplete results + + + + + %n file(s) + + %n file + + + + + %n word(s) + + %n word + + + + + Updating + + + + + + Add Docs + + + + + Select a collection to make it available to the chat model. + + + + + ConfirmationDialog + + + OK + + + + + Cancel + + + + + Download + + + Model "%1" is installed successfully. 
+ + + + + ERROR: $MODEL_NAME is empty. + + + + + ERROR: $API_KEY is empty. + + + + + ERROR: $BASE_URL is invalid. + + + + + ERROR: Model "%1 (%2)" is conflict. + + + + + Model "%1 (%2)" is installed successfully. + + + + + Model "%1" is removed. + + + + + HomeView + + + Welcome to GPT4All + + + + + The privacy-first LLM chat application + + + + + Start chatting + + + + + Start Chatting + + + + + Chat with any LLM + + + + + LocalDocs + + + + + Chat with your local files + + + + + Find Models + + + + + Explore and download models + + + + + Latest news + + + + + Latest news from GPT4All + + + + + Release Notes + + + + + Documentation + + + + + Discord + + + + + X (Twitter) + + + + + Github + + + + + nomic.ai + + + + + Subscribe to Newsletter + + + + + LocalDocsSettings + + + LocalDocs + + + + + LocalDocs Settings + + + + + Indexing + + + + + Allowed File Extensions + + + + + Comma-separated list. LocalDocs will only attempt to process files with these extensions. + + + + + Embedding + + + + + Use Nomic Embed API + + + + + Embed documents using the fast Nomic API instead of a private local model. Requires restart. + + + + + Nomic API Key + + + + + API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart. + + + + + Embeddings Device + + + + + The compute device used for embeddings. Requires restart. + + + + + Application default + + + + + Display + + + + + Show Sources + + + + + Display the sources used for each response. + + + + + Advanced + + + + + Warning: Advanced usage only. + + + + + Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>. + + + + + Document snippet size (characters) + + + + + Number of characters per document snippet. 
Larger numbers increase likelihood of factual responses, but also result in slower generation. + + + + + Max document snippets per prompt + + + + + Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation. + + + + + LocalDocsView + + + LocalDocs + + + + + Chat with your local files + + + + + + Add Collection + + + + + <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however. + + + + + No Collections Installed + + + + + Install a collection of local documents to get started using this feature + + + + + + Add Doc Collection + + + + + Shows the add model view + + + + + Indexing progressBar + + + + + Shows the progress made in the indexing + + + + + ERROR + + + + + INDEXING + + + + + EMBEDDING + + + + + REQUIRES UPDATE + + + + + READY + + + + + INSTALLING + + + + + Indexing in progress + + + + + Embedding in progress + + + + + This collection requires an update after version change + + + + + Automatically reindexes upon changes to the folder + + + + + Installation in progress + + + + + % + + + + + %n file(s) + + %n file + + + + + %n word(s) + + %n word + + + + + Remove + + + + + Rebuild + + + + + Reindex this folder from scratch. This is slow and usually not needed. + + + + + Update + + + + + Update the collection to the new version. 
This is a slow operation. + + + + + ModelList + + + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> + + + + + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 + + + + + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 + + + + + <strong>Mistral Tiny model</strong><br> %1 + + + + + <strong>Mistral Small model</strong><br> %1 + + + + + <strong>Mistral Medium model</strong><br> %1 + + + + + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. + + + + + + cannot open "%1": %2 + + + + + cannot create "%1": %2 + + + + + %1 (%2) + + + + + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> + + + + + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> + + + + + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> + + + + + <strong>Connect to OpenAI-compatible API server</strong><br> %1 + + + + + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> + + + + + ModelSettings + + + Model + + + + + %1 system message? 
+ + + + + + Clear + + + + + + Reset + + + + + The system message will be %1. + + + + + removed + + + + + + reset to the default + + + + + %1 chat template? + + + + + The chat template will be %1. + + + + + erased + + + + + Model Settings + + + + + Clone + + + + + Remove + + + + + Name + + + + + Model File + + + + + System Message + + + + + A message to set the context or guide the behavior of the model. Leave blank for none. NOTE: Since GPT4All 3.5, this should not contain control tokens. + + + + + System message is not <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">plain text</a>. + + + + + Chat Template + + + + + This Jinja template turns the chat into input for the model. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. + + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. + + + + + <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Syntax error</a>: %1 + + + + + Chat template is not in <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Jinja format</a>. + + + + + Chat Name Prompt + + + + + Prompt used to automatically generate chat names. + + + + + Suggested FollowUp Prompt + + + + + Prompt used to generate suggested follow-up questions. + + + + + Context Length + + + + + Number of input and output tokens the model sees. + + + + + Maximum combined prompt/response tokens before information is lost. +Using more context than the model was trained on will yield poor results. +NOTE: Does not take effect until you reload the model. + + + + + Temperature + + + + + Randomness of model output. Higher -> more variation. + + + + + Temperature increases the chances of choosing less likely tokens. +NOTE: Higher temperature gives more creative but less predictable outputs. + + + + + Top-P + + + + + Nucleus Sampling factor. Lower -> more predictable. 
+ + + + + Only the most likely tokens up to a total probability of top_p can be chosen. +NOTE: Prevents choosing highly unlikely tokens. + + + + + Min-P + + + + + Minimum token probability. Higher -> more predictable. + + + + + Sets the minimum relative probability for a token to be considered. + + + + + Top-K + + + + + Size of selection pool for tokens. + + + + + Only the top K most likely tokens will be chosen from. + + + + + Max Length + + + + + Maximum response length, in tokens. + + + + + Prompt Batch Size + + + + + The batch size used for prompt processing. + + + + + Amount of prompt tokens to process at once. +NOTE: Higher values can speed up reading prompts but will use more RAM. + + + + + Repeat Penalty + + + + + Repetition penalty factor. Set to 1 to disable. + + + + + Repeat Penalty Tokens + + + + + Number of previous tokens used for penalty. + + + + + GPU Layers + + + + + Number of model layers to load into VRAM. + + + + + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. +Lower values increase CPU load and RAM usage, and make inference slower. +NOTE: Does not take effect until you reload the model. + + + + + ModelsView + + + No Models Installed + + + + + Install a model to get started using GPT4All + + + + + + + Add Model + + + + + Shows the add model view + + + + + Installed Models + + + + + Locally installed chat models + + + + + Model file + + + + + Model file to be downloaded + + + + + Description + + + + + File description + + + + + Cancel + + + + + Resume + + + + + Stop/restart/start the download + + + + + Remove + + + + + Remove model from filesystem + + + + + + Install + + + + + Install online model + + + + + <strong><font size="1"><a href="#error">Error</a></strong></font> + + + + + <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> + + + + + ERROR: $API_KEY is empty. 
+ + + + + ERROR: $BASE_URL is empty. + + + + + enter $BASE_URL + + + + + ERROR: $MODEL_NAME is empty. + + + + + enter $MODEL_NAME + + + + + %1 GB + + + + + ? + + + + + Describes an error that occurred when downloading + + + + + Error for incompatible hardware + + + + + Download progressBar + + + + + Shows the progress made in the download + + + + + Download speed + + + + + Download speed in bytes/kilobytes/megabytes per second + + + + + Calculating... + + + + + + + + Whether the file hash is being calculated + + + + + Busy indicator + + + + + Displayed when the file hash is being calculated + + + + + enter $API_KEY + + + + + File size + + + + + RAM required + + + + + Parameters + + + + + Quant + + + + + Type + + + + + MyFancyLink + + + Fancy link + + + + + A stylized link + + + + + MyFileDialog + + + Please choose a file + + + + + MyFolderDialog + + + Please choose a directory + + + + + MySettingsLabel + + + Clear + + + + + Reset + + + + + MySettingsTab + + + Restore defaults? + + + + + This page of settings will be reset to the defaults. + + + + + Restore Defaults + + + + + Restores settings dialog to a default state + + + + + NetworkDialog + + + Contribute data to the GPT4All Opensource Datalake. + + + + + By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements. + +When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake. + +NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should; however, have an expectation of an optional attribution if you wish. 
Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data! + + + + + Terms for opt-in + + + + + Describes what will happen when you opt-in + + + + + Please provide a name for attribution (optional) + + + + + Attribution (optional) + + + + + Provide attribution + + + + + Enable + + + + + Enable opt-in + + + + + Cancel + + + + + Cancel opt-in + + + + + NewVersionDialog + + + New version is available + + + + + Update + + + + + Update to new version + + + + + PopupDialog + + + Reveals a shortlived help balloon + + + + + Busy indicator + + + + + Displayed when the popup is showing busy + + + + + SettingsView + + + + Settings + + + + + Contains various application settings + + + + + Application + + + + + Model + + + + + LocalDocs + + + + + StartupDialog + + + Welcome! + + + + + ### Release Notes +%1<br/> +### Contributors +%2 + + + + + Release notes + + + + + Release notes for this version + + + + + ### Opt-ins for anonymous usage analytics and datalake +By enabling these features, you will be able to participate in the democratic process of training a +large language model by contributing data for future model improvements. + +When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All +Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you +can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake. + +NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. +You should have no expectation of chat privacy when this feature is enabled. You should; however, have +an expectation of an optional attribution if you wish. 
Your chat data will be openly available for anyone +to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all +attribution information attached to your data and you will be credited as a contributor to any GPT4All +model release that uses your data! + + + + + Terms for opt-in + + + + + Describes what will happen when you opt-in + + + + + Opt-in to anonymous usage analytics used to improve GPT4All + + + + + + Opt-in for anonymous usage statistics + + + + + + Yes + + + + + Allow opt-in for anonymous usage statistics + + + + + + No + + + + + Opt-out for anonymous usage statistics + + + + + Allow opt-out for anonymous usage statistics + + + + + Opt-in to anonymous sharing of chats to the GPT4All Datalake + + + + + + Opt-in for network + + + + + Allow opt-in for network + + + + + Allow opt-in anonymous sharing of chats to the GPT4All Datalake + + + + + Opt-out for network + + + + + Allow opt-out anonymous sharing of chats to the GPT4All Datalake + + + + + ThumbsDownDialog + + + Please edit the text below to provide a better response. (optional) + + + + + Please provide a better response... + + + + + Submit + + + + + Submits the user's response + + + + + Cancel + + + + + Closes the response dialog + + + + + main + + + <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. 
The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> + + + + + GPT4All v%1 + + + + + Restore + + + + + Quit + + + + + <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help. + + + + + Connection to datalake failed. + + + + + Saving chats. + + + + + Network dialog + + + + + opt-in to share feedback/conversations + + + + + Home view + + + + + Home view of application + + + + + Home + + + + + Chat view + + + + + Chat view to interact with models + + + + + Chats + + + + + + Models + + + + + Models view for installed models + + + + + + LocalDocs + + + + + LocalDocs view to configure and use local docs + + + + + + Settings + + + + + Settings view for application configuration + + + + + The datalake is enabled + + + + + Using a network model + + + + + Server mode is enabled + + + + + Installed models + + + + + View of installed models + + + +