AddCollectionView← Existing CollectionsAdd Document CollectionAdd a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.NameCollection name...Name of the collection to add (Required)FolderFolder path...Folder path to documents (Required)BrowseCreate CollectionAddModelView← Existing ModelsExplore ModelsDiscover and download models by keyword search...Text field for discovering and filtering downloadable modelsInitiate model discovery and filteringTriggers discovery and filtering of modelsDefaultLikesDownloadsRecentAscDescNoneSearching · %1Sort by: %1Sort dir: %1Limit: %1Network error: could not retrieve %1Busy indicatorDisplayed when the models request is ongoingModel fileModel file to be downloadedDescriptionFile descriptionCancelResumeDownloadStop/restart/start the downloadRemoveRemove model from filesystemInstallInstall online model<strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</font></strong>ERROR: $API_KEY is empty.ERROR: $BASE_URL is empty.enter $BASE_URLERROR: $MODEL_NAME is empty.enter $MODEL_NAME%1 GB?Describes an error that occurred when downloading<strong><font size="1"><a href="#error">Error</a></font></strong>Error for incompatible hardwareDownload progress barShows the progress made in the downloadDownload speedDownload speed in bytes/kilobytes/megabytes per secondCalculating...Whether the file hash is being calculatedDisplayed when the file hash is being calculatedenter $API_KEYFile sizeRAM requiredParametersQuantTypeApplicationSettingsApplicationNetwork dialogopt-in to share feedback/conversationsError dialogApplication SettingsGeneralThemeThe application color scheme.DarkLightERROR: Update system could not find the MaintenanceTool used to check for updates!<br/><br/>Did you install this application using the online installer?
If so, the MaintenanceTool executable should be located one directory above where this application resides on your filesystem.<br/><br/>If you can't start it manually, then I'm afraid you'll have to reinstall.LegacyDarkFont SizeThe size of text in the application.SmallMediumLargeLanguage and LocaleThe language and locale you wish to use.System LocaleDeviceThe compute device used for text generation.Application defaultDefault ModelThe preferred model for new chats. Also used as the local server fallback.Suggestion ModeGenerate suggested follow-up questions at the end of responses.When chatting with LocalDocsWhenever possibleNeverDownload PathWhere to store local models and the LocalDocs database.BrowseChoose where to save model filesEnable DatalakeSend chats and feedback to the GPT4All Open-Source Datalake.AdvancedCPU ThreadsThe number of CPU threads used for inference and embedding.Enable System TrayThe application will minimize to the system tray when the window is closed.Enable Local API ServerExpose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.API Server PortThe port to use for the local server. 
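The local API server described above speaks the OpenAI chat-completions protocol. A minimal sketch of building a request against it, assuming the default port 4891 and a hypothetical installed model name (both are assumptions; use your configured "API Server Port" and an actual installed model):

```python
import json
import urllib.request

# Sketch of a request against the local OpenAI-compatible server,
# assuming it is enabled in Application Settings. The port (4891) and
# model name below are assumptions, not guarantees.
def build_chat_request(prompt, model="Llama 3 8B Instruct", port=4891):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"http://localhost:{port}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Why is the sky blue?")
# urllib.request.urlopen(req) would send it once the server is running.
```
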
Requires restart.Check For UpdatesManually check for an update to GPT4All.UpdatesChatNew ChatServer ChatChatAPIWorkerERROR: Network error occurred while connecting to the API serverChatAPIWorker::handleFinished got HTTP Error %1 %2ChatDrawerDrawerMain navigation drawer+ New ChatCreate a new chatSelect the current chat or edit the chat when in edit modeEdit chat nameSave chat nameDelete chatConfirm chat deletionCancel chat deletionList of chatsList of chats in the drawer dialogChatItemViewGPT4AllYouresponse stopped ...retrieving localdocs: %1 ...searching localdocs: %1 ...processing ...generating response ...generating questions ...CopyCopy MessageDisable markdownEnable markdown%n Source(s)%n Source%n SourcesLocalDocsEdit this message?All following messages will be permanently erased.Redo this response?Cannot edit chat without a loaded model.Cannot edit chat while the model is generating.EditCannot redo response without a loaded model.Cannot redo response while the model is generating.RedoLike responseDislike responseSuggested follow-upsChatLLMYour message was too long and could not be processed (%1 > %2). Please try again with something shorter.ChatListModelTODAYTHIS WEEKTHIS MONTHLAST SIX MONTHSTHIS YEARLAST YEARChatView<h3>Warning</h3><p>%1</p>Conversation copied to clipboard.Code copied to clipboard.The entire chat will be erased.Chat panelChat panel with optionsReload the currently loaded modelEject the currently loaded modelNo model installed.Model loading error.Waiting for model...Switching context...Choose a model...Not found: %1The top item is the current modelLocalDocsAdd documentsadd collections of documents to the chatLoad the default modelLoads the default model which can be changed in settingsNo Model InstalledGPT4All requires that you install at least one
model to get startedInstall a ModelShows the add model viewConversation with the modelprompt / response pairs from the conversationLegacy prompt template needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings.No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured.The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank.Legacy system prompt needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings.Copy%n Source(s)%n Source%n SourcesErase and reset chat sessionCopy chat session to clipboardAdd mediaAdds media to the promptStop generatingStop the current response generationAttachSingle FileReloads the model<h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. 
Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type</li><li>Check that the model file is complete in the download folder</li><li>You can find the download folder in the settings dialog</li><li>If you've sideloaded the model, ensure the file is not corrupt by checking its md5sum</li><li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the GUI</li><li>Check out our <a href="https://discord.gg/4M2QFmTt2k">Discord channel</a> for help</li></ul>Erase conversation?Changing the model will erase the current conversation.Reload · %1Loading · %1Load · %1 (default) →Send a message...Load a model to continue...Send messages/prompts to the modelCutPasteSelect AllSend messageSends the message/prompt contained in the text field to the modelCollectionsDrawerWarning: searching collections while indexing can return incomplete results%n file(s)%n file%n files%n word(s)%n word%n wordsUpdating+ Add DocsSelect a collection to make it available to the chat model.ConfirmationDialogOKCancelDownloadModel "%1" was installed successfully.ERROR: $MODEL_NAME is empty.ERROR: $API_KEY is empty.ERROR: $BASE_URL is invalid.ERROR: Model "%1 (%2)" conflicts with an existing model.Model "%1 (%2)" was installed successfully.Model "%1" was removed.HomeViewWelcome to GPT4AllThe privacy-first LLM chat applicationStart chattingStart ChattingChat with any LLMLocalDocsChat with your local filesFind ModelsExplore and download modelsLatest newsLatest news from GPT4AllRelease NotesDocumentationDiscordX (Twitter)Githubnomic.aiSubscribe to NewsletterLocalDocsSettingsLocalDocsLocalDocs SettingsIndexingAllowed File ExtensionsComma-separated list. LocalDocs will only attempt to process files with these extensions.EmbeddingUse Nomic Embed APIEmbed documents using the fast Nomic API instead of a private local model. Requires restart.Nomic API KeyAPI key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>.
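One of the error-recovery suggestions above is to verify a sideloaded model's md5sum. That check can be sketched in Python; the temporary demo file stands in for a real model file (point the helper at the file in your download folder and compare the digest against the one published for that model):

```python
import hashlib
import tempfile

# Sketch of the md5 sideload check. Reads the file in chunks so large
# model files do not need to fit in memory.
def md5_of(path, chunk_size=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file so the sketch runs as-is; replace with your
# actual model path.
tmp = tempfile.NamedTemporaryFile(suffix=".gguf", delete=False)
tmp.write(b"demo model bytes")
tmp.close()
print(md5_of(tmp.name))
```
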
Requires restart.Embeddings DeviceThe compute device used for embeddings. Requires restart.Application defaultDisplayShow SourcesDisplay the sources used for each response.AdvancedWarning: Advanced usage only.Values too large may cause LocalDocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.Document snippet size (characters)Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.Max document snippets per promptMaximum number of best-matched document snippets to add to the prompt context. Larger numbers increase likelihood of factual responses, but also result in slower generation.LocalDocsViewLocalDocsChat with your local files+ Add Collection<h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them.
You will have to recreate your collections, however.No Collections InstalledInstall a collection of local documents to get started using this feature+ Add Doc CollectionShows the add collection viewIndexing progress barShows the progress made in the indexingERRORINDEXINGEMBEDDINGREQUIRES UPDATEREADYINSTALLINGIndexing in progressEmbedding in progressThis collection requires an update after a version changeAutomatically reindexes upon changes to the folderInstallation in progress%%n file(s)%n file%n files%n word(s)%n word%n wordsRemoveRebuildReindex this folder from scratch. This is slow and usually not needed.UpdateUpdate the collection to the new version. This is a slow operation.ModelList<ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here</a>.</li></ul><strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1<strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2<strong>Mistral Tiny model</strong><br> %1<strong>Mistral Small model</strong><br> %1<strong>Mistral Medium model</strong><br> %1<br><br><i>* Even if you pay OpenAI for ChatGPT-4, this does not guarantee API key access.
Contact OpenAI for more info.</i>cannot open "%1": %2cannot create "%1": %2%1 (%2)<strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul><ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li></ul><ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API server</li></ul><strong>Connect to OpenAI-compatible API server</strong><br> %1<strong>Created by %1.</strong><br><ul><li>Published on %2.</li><li>This model has %3 likes.</li><li>This model has %4 downloads.</li><li>More info can be found <a href="https://huggingface.co/%5">here</a>.</li></ul>ModelSettingsModel%1 system message?ClearResetThe system message will be %1.removedreset to the default%1 chat template?The chat template will be %1.erasedModel SettingsCloneRemoveNameModel FileSystem MessageA message to set the context or guide the behavior of the model. Leave blank for none.
NOTE: Since GPT4All 3.5, this should not contain control tokens.System message is not <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">plain text</a>.Chat TemplateThis Jinja template turns the chat into input for the model.No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured.The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank.<a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Syntax error</a>: %1Chat template is not in <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Jinja format</a>.Chat Name PromptPrompt used to automatically generate chat names.Suggested FollowUp PromptPrompt used to generate suggested follow-up questions.Context LengthNumber of input and output tokens the model sees.Maximum combined prompt/response tokens before information is lost.
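The chat template described above maps the list of chat messages to the model's raw input. GPT4All uses Jinja for this, but the transformation can be illustrated in plain Python; the ChatML-style `<|im_start|>`/`<|im_end|>` markers below are purely illustrative assumptions, since real templates ship with each model:

```python
# Illustrative stand-in for a Jinja chat template: turn the chat
# (a list of role/content messages) into one prompt string.
# The <|im_start|>/<|im_end|> markers are ChatML-style and only an
# example; a given model's actual template may differ entirely.
def apply_chat_template(messages, add_generation_prompt=True):
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        out.append("<|im_start|>assistant\n")
    return "".join(out)
```
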
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.TemperatureRandomness of model output. Higher -> more variation.Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.Top-PNucleus Sampling factor. Lower -> more predictable.Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.Min-PMinimum token probability. Higher -> more predictable.Sets the minimum relative probability for a token to be considered.Top-KSize of selection pool for tokens.Only the top K most likely tokens will be chosen from.Max LengthMaximum response length, in tokens.Prompt Batch SizeThe batch size used for prompt processing.Amount of prompt tokens to process at once.
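The sampling settings described above (Temperature, Top-P, Min-P, Top-K) interact in one pipeline. A minimal illustrative sketch, not GPT4All's actual implementation: temperature rescales the logits, then Top-K, Top-P, and Min-P trim the candidate pool before one token is drawn:

```python
import math
import random

# Illustrative sketch of the sampling controls described above.
# Default values mirror typical settings but are assumptions.
def sample(logits, temperature=0.7, top_k=40, top_p=0.4, min_p=0.0):
    scaled = [x / temperature for x in logits]      # Temperature rescale
    m = max(scaled)
    weights = [math.exp(x - m) for x in scaled]     # stable softmax
    total = sum(weights)
    ranked = sorted(
        ((w / total, i) for i, w in enumerate(weights)), reverse=True
    )
    ranked = ranked[:top_k]                         # Top-K: keep K most likely
    kept, cum = [], 0.0
    for p, i in ranked:
        if kept and cum >= top_p:                   # Top-P: enough probability mass
            break
        if p < min_p * ranked[0][0]:                # Min-P: relative probability floor
            break
        kept.append((p, i))
        cum += p
    r = random.uniform(0.0, cum)                    # draw from the surviving pool
    for p, i in kept:
        r -= p
        if r <= 0.0:
            return i
    return kept[-1][1]

# With one dominant logit and a small top_p, sampling is deterministic:
print(sample([0.0, 0.0, 10.0, 0.0], temperature=1.0, top_p=0.1))
```
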
NOTE: Higher values can speed up reading prompts but will use more RAM.Repeat PenaltyRepetition penalty factor. Set to 1 to disable.Repeat Penalty TokensNumber of previous tokens used for penalty.GPU LayersNumber of model layers to load into VRAM.How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.ModelsViewNo Models InstalledInstall a model to get started using GPT4All+ Add ModelShows the add model viewInstalled ModelsLocally installed chat modelsModel fileModel file to be downloadedDescriptionFile descriptionCancelResumeStop/restart/start the downloadRemoveRemove model from filesystemInstallInstall online model<strong><font size="1"><a href="#error">Error</a></font></strong><strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</font></strong>ERROR: $API_KEY is empty.ERROR: $BASE_URL is empty.enter $BASE_URLERROR: $MODEL_NAME is empty.enter $MODEL_NAME%1 GB?Describes an error that occurred when downloadingError for incompatible hardwareDownload progress barShows the progress made in the downloadDownload speedDownload speed in bytes/kilobytes/megabytes per secondCalculating...Whether the file hash is being calculatedBusy indicatorDisplayed when the file hash is being calculatedenter $API_KEYFile sizeRAM requiredParametersQuantTypeMyFancyLinkFancy linkA stylized linkMyFileDialogPlease choose a fileMyFolderDialogPlease choose a directoryMySettingsLabelClearResetMySettingsTabRestore defaults?This page of settings will be reset to the defaults.Restore DefaultsRestores settings dialog to a default stateNetworkDialogContribute data to the GPT4All Open-Source Datalake.By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should, however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data!Terms for opt-inDescribes what will happen when you opt-inPlease provide a name for attribution (optional)Attribution (optional)Provide attributionEnableEnable opt-inCancelCancel opt-inNewVersionDialogNew version is availableUpdateUpdate to new versionPopupDialogReveals a short-lived help balloonBusy indicatorDisplayed when the popup is busySettingsViewSettingsContains various application settingsApplicationModelLocalDocsStartupDialogWelcome!### Release Notes
%1<br/>
### Contributors
%2Release notesRelease notes for this version### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted in, your conversation will be sent to the GPT4All
Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you
can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake.
You should have no expectation of chat privacy when this feature is enabled. You should, however, have
an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone
to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all
attribution information attached to your data and you will be credited as a contributor to any GPT4All
model release that uses your data!Terms for opt-inDescribes what will happen when you opt-inOpt-in for anonymous usage statisticsYesAllow opt-in for anonymous usage statisticsNoOpt-out for anonymous usage statisticsAllow opt-out for anonymous usage statisticsOpt-in for networkAllow opt-in for networkAllow opt-in anonymous sharing of chats to the GPT4All DatalakeOpt-out for networkAllow opt-out anonymous sharing of chats to the GPT4All DatalakeThumbsDownDialogPlease edit the text below to provide a better response. (optional)Please provide a better response...SubmitSubmits the user's responseCancelCloses the response dialogmain<h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>GPT4All v%1RestoreQuit<h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. 
Check out our <a href="https://discord.gg/4M2QFmTt2k">Discord channel</a> for help.Connection to datalake failed.Saving chats.Network dialogopt-in to share feedback/conversationsHome viewHome view of applicationHomeChat viewChat view to interact with modelsChatsModelsModels view for installed modelsLocalDocsLocalDocs view to configure and use local docsSettingsSettings view for application configurationThe datalake is enabledUsing a network modelServer mode is enabledInstalled modelsView of installed models