Mirror of https://github.com/nomic-ai/gpt4all.git (synced 2025-04-27 19:35:20 +00:00)
Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog.
3.10.0 - 2025-02-24
Added
- Whitelist Granite (non-MoE) model architecture (by @ThiloteE in #3487)
- Add support for CUDA compute 5.0 GPUs such as the GTX 750 (#3499)
- Add a Remote Providers tab to the Add Model page (#3506)
Changed
- Substitute prettier default templates for OLMoE 7B 0924/0125 and Granite 3.1 3B/8B (by @ThiloteE in #3471)
- Build with LLVM Clang 19 on macOS and Ubuntu (#3500)
Fixed
- Fix several potential crashes (#3465)
- Fix visual spacing issues with DeepSeek models (#3470)
- Add missing strings to Italian translation (by @Harvester62 in #3496)
- Update Simplified Chinese translation (by @Junior2Ran in #3467)
3.9.0 - 2025-02-04
Fixed
- Fix "index N is not a prompt" when using LocalDocs with reasoning (#3451)
- Work around rendering artifacts on Snapdragon SoCs with Windows (#3450)
- Prevent DeepSeek-R1 reasoning from appearing in chat names and follow-up questions (#3458)
- Fix LocalDocs crash on Windows ARM when reading PDFs (#3460)
- Fix UI freeze when chat template is `{#` (#3446)
3.8.0 - 2025-01-30
Added
- Support DeepSeek-R1 Qwen models (#3431)
- Support for think tags in the GUI (#3440)
- Support specifying SHA256 hash in models3.json instead of MD5 (#3437)
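The SHA256 entry above replaces MD5 as the integrity check for downloaded model files. As a rough illustration (a hypothetical helper, not GPT4All's actual download code), verifying a file against an expected SHA-256 digest looks like this:

```python
import hashlib

def file_matches_digest(path: str, expected_sha256: str, chunk_size: int = 1 << 20) -> bool:
    """Stream a file through SHA-256 in chunks and compare against the expected hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

Streaming in chunks keeps memory use constant even for multi-gigabyte GGUF files.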
Changed
- Use minja instead of Jinja2Cpp for significantly improved template compatibility (#3433)
Fixed
- Fix regression while using localdocs with server API (#3410)
- Don't show system messages in server chat view (#3411)
- Fix `codesign --verify` failure on macOS (#3413)
- Code Interpreter: Fix console.log not accepting a single string after v3.7.0 (#3426)
- Fix Phi 3.1 Mini 128K Instruct template (by @ThiloteE in #3412)
- Don't block the gui thread for reasoning (#3435)
- Fix corruption of unicode in output of reasoning models (#3443)
3.7.0 - 2025-01-21
Added
- Add support for the Windows ARM64 target platform (CPU-only) (#3385)
Changed
- Update from Qt 6.5.1 to 6.8.1 (#3386)
Fixed
- Fix the timeout error in code interpreter (#3369)
- Fix code interpreter console.log not accepting multiple arguments (#3371)
- Remove 'X is defined' checks from templates for better compatibility (#3372)
- Jinja2Cpp: Add 'if' requirement for 'else' parsing to fix crash (#3373)
- Save chats on quit, even if the window isn't closed first (#3387)
- Add chat template replacements for five new models and fix EM German Mistral (#3393)
- Fix crash when entering `{{ a["foo"(` as chat template (#3394)
- Sign the maintenance tool on macOS to prevent crash on Sequoia (#3391)
- Jinja2Cpp: Fix operator precedence in 'not X is defined' (#3402)
3.6.1 - 2024-12-20
Fixed
- Fix the stop generation button no longer working in v3.6.0 (#3336)
- Fix the copy entire conversation button no longer working in v3.6.0 (#3336)
3.6.0 - 2024-12-19
Added
- Automatically substitute chat templates that are not compatible with Jinja2Cpp in GGUFs (#3327)
- Built-in JavaScript code interpreter tool plus model (#3173)
Fixed
- Fix remote model template to allow for XML in messages (#3318)
- Fix Jinja2Cpp bug that broke system message detection in chat templates (#3325)
- Fix LocalDocs sources displaying in unconsolidated form after v3.5.0 (#3328)
3.5.3 - 2024-12-16
Fixed
- Fix LocalDocs not using information from sources in v3.5.2 (#3302)
3.5.2 - 2024-12-13
Added
- Create separate download pages for built-in and HuggingFace models (#3269)
Fixed
- Fix API server ignoring assistant messages in history after v3.5.0 (#3256)
- Fix API server replying with incorrect token counts and stop reason after v3.5.0 (#3256)
- Fix API server remembering previous, unrelated conversations after v3.5.0 (#3256)
- Fix mishandling of default chat template and system message of cloned models in v3.5.0 (#3262)
- Fix untranslated text on the startup dialog (#3293)
3.5.1 - 2024-12-10
Fixed
- Fix an incorrect value for currentResponse (#3245)
- Fix the default model button so it works again after 3.5.0 (#3246)
- Fix chat templates for Nous Hermes 2 Mistral, Mistral OpenOrca, Qwen 2, and remote models (#3250)
- Fix chat templates for Llama 3.2 models (#3251)
3.5.0 - 2024-12-09
Changed
- Update Italian translation (by @Harvester62 in #3236)
- Update Romanian translation (by @SINAPSA-IC in #3232)
Fixed
- Fix a few more problems with the Jinja changes (#3239)
3.5.0-rc2 - 2024-12-06
Changed
- Fade messages out with an animation when they are removed from the chat view (#3227)
- Tweak wording of edit/redo confirmation dialogs (#3228)
- Make edit/redo buttons disabled instead of invisible when they are temporarily unavailable (#3228)
3.5.0-rc1 - 2024-12-04
Added
- Add ability to attach text, markdown, and rst files to chat (#3135)
- Add feature to minimize to system tray (by @bgallois in #3109)
- Basic cache for faster prefill when the input shares a prefix with previous context (#3073)
- Add ability to edit prompts and regenerate any response (#3147)
Changed
- Implement Qt 6.8 compatibility (#3121)
- Use Jinja for chat templates instead of per-message QString.arg-style templates (#3147)
- API server: Use system message(s) from client instead of settings (#3147)
- API server: Accept messages in any order supported by the model instead of requiring user/assistant pairs (#3147)
- Remote models: Pass system message with "system" role instead of joining with user message (#3147)
Removed
- Remove option to save binary model state to disk (#3147)
Fixed
- Fix bug in GUI when localdocs encounters binary data (#3137)
- Fix LocalDocs bugs that prevented some docx files from fully chunking (#3140)
- Fix missing softmax that was causing crashes and effectively infinite temperature since 3.4.0 (#3202)
3.4.2 - 2024-10-16
Fixed
- Limit bm25 retrieval to only specified collections (#3083)
- Fix bug removing documents because of a wrong case sensitive file suffix check (#3083)
- Fix bug with hybrid localdocs search where database would get out of sync (#3083)
- Fix GUI bug where the localdocs embedding device appears blank (#3083)
- Prevent LocalDocs from not making progress in certain cases (#3094)
3.4.1 - 2024-10-11
Fixed
- Improve the Italian translation (#3048)
- Fix models.json cache location (#3052)
- Fix LocalDocs regressions caused by docx change (#3079)
- Fix Go code being highlighted as Java (#3080)
3.4.0 - 2024-10-08
Added
- Add bm25 hybrid search to localdocs (#2969)
- LocalDocs support for .docx files (#2986)
- Add support for attaching Excel spreadsheet to chat (#3007, #3028)
Changed
- Rebase llama.cpp on latest upstream as of September 26th (#2998)
- Change the error message when a message is too long (#3004)
- Simplify chatmodel to get rid of unnecessary field and bump chat version (#3016)
- Allow ChatLLM to have direct access to ChatModel for restoring state from text (#3018)
- Improvements to XLSX conversion and UI fix (#3022)
Fixed
- Fix a crash when attempting to continue a chat loaded from disk (#2995)
- Fix the local server rejecting min_p/top_p less than 1 (#2996)
- Fix "regenerate" always forgetting the most recent message (#3011)
- Fix loaded chats forgetting context when there is a system prompt (#3015)
- Make it possible to downgrade and keep some chats, and avoid crash for some model types (#3030)
- Fix scroll position being reset in model view, and attempt a better fix for the clone issue (#3042)
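The min_p/top_p fix above (#2996) concerns request validation in the local API server: both parameters are probabilities, and values strictly below 1 are the normal case, not an error. A minimal sketch of top-p (nucleus) filtering over an already-normalized distribution (illustrative only, not the project's actual sampler):

```python
def top_p_filter(probs: list[float], top_p: float) -> list[int]:
    """Return indices of the smallest set of highest-probability tokens
    whose cumulative probability reaches top_p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept
```

With `top_p = 0.6` and probabilities `[0.5, 0.3, 0.2]`, only the first two tokens survive; a server that rejected `top_p < 1` would make this standard usage impossible.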
3.3.1 - 2024-09-27 (v3.3.y)
Fixed
- Fix a crash when attempting to continue a chat loaded from disk (#2995)
- Fix the local server rejecting min_p/top_p less than 1 (#2996)
3.3.0 - 2024-09-20
Added
- Use greedy sampling when temperature is set to zero (#2854)
- Use configured system prompt in server mode and ignore system messages (#2921, #2924)
- Add more system information to anonymous usage stats (#2939)
- Check for unsupported Ubuntu and macOS versions at install time (#2940)
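The temperature-zero change above amounts to replacing stochastic sampling with a deterministic argmax over the logits. A minimal sketch of the idea (hypothetical function, not GPT4All's actual sampler):

```python
import math
import random

def sample_token(logits: list[float], temperature: float) -> int:
    """Pick the next token: greedy argmax at temperature 0, softmax sampling otherwise."""
    if temperature == 0.0:
        # Greedy: deterministically take the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]
```

Dividing by temperature would be undefined at zero, so treating zero as a greedy special case both avoids the division and gives users a way to request deterministic output.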
Changed
- The offline update button now directs users to the offline installer releases page. (by @3Simplex in #2888)
- Change the website link on the home page to point to the new URL (#2915)
- Smaller default window size, dynamic minimum size, and scaling tweaks (#2904)
- Only allow a single instance of program to be run at a time (#2923)
Fixed
- Bring back "Auto" option for Embeddings Device as "Application default," which went missing in v3.1.0 (#2873)
- Correct a few strings in the Italian translation (by @Harvester62 in #2872 and #2909)
- Correct typos in Traditional Chinese translation (by @supersonictw in #2852)
- Set the window icon on Linux (#2880)
- Corrections to the Romanian translation (by @SINAPSA-IC in #2890)
- Fix singular/plural forms of LocalDocs "x Sources" (by @cosmic-snow in #2885)
- Fix a typo in Model Settings (by @3Simplex in #2916)
- Fix the antenna icon tooltip when using the local server (#2922)
- Fix a few issues with locating files and handling errors when loading remote models on startup (#2875)
- Significantly improve API server request parsing and response correctness (#2929)
- Remove unnecessary dependency on Qt WaylandCompositor module (#2949)
- Update translations (#2970)
- Fix macOS installer and remove extra installed copy of Nomic Embed (#2973)
3.2.1 - 2024-08-13
Fixed
- Do not initialize Vulkan driver when only using CPU (#2843)
- Fix a potential crash on exit when using only CPU on Linux with NVIDIA (does not affect X11) (#2843)
- Fix default CUDA architecture list after #2802 (#2855)
3.2.0 - 2024-08-12
Added
- Add Qwen2-1.5B-Instruct to models3.json (by @ThiloteE in #2759)
- Enable translation feature for seven languages: English, Spanish, Italian, Portuguese, Chinese Simplified, Chinese Traditional, Romanian (#2830)
Changed
- Add missing entries to Italian translation (by @Harvester62 in #2783)
- Use llama_kv_cache ops to shift context faster (#2781)
- Don't stop generating at end of context (#2781)
Fixed
- Case-insensitive LocalDocs source icon detection (by @cosmic-snow in #2761)
- Fix comparison of pre- and post-release versions for update check and models3.json (#2762, #2772)
- Fix several backend issues (#2778)
- Make reverse prompt detection work more reliably and prevent it from breaking output (#2781)
- Disallow context shift for chat name and follow-up generation to prevent bugs (#2781)
- Explicitly target macOS 12.6 in CI to fix Metal compatibility on older macOS (#2846)
3.1.1 - 2024-07-27
Added
- Add Llama 3.1 8B Instruct to models3.json (by @3Simplex in #2731 and #2732)
- Portuguese (BR) translation (by @thiagojramos in #2733)
- Support adding arbitrary OpenAI-compatible models by URL (by @supersonictw in #2683)
- Support Llama 3.1 RoPE scaling (#2758)
Changed
- Add missing entries to Chinese (Simplified) translation (by @wuodoo in #2716 and #2749)
- Update translation files and add missing paths to CMakeLists.txt (#2735)
3.1.0 - 2024-07-24
Added
- Generate suggested follow-up questions (#2634, #2723)
  - Also add options for the chat name and follow-up question prompt templates
- Scaffolding for translations (#2612)
- Spanish (MX) translation (by @jstayco in #2654)
- Chinese (Simplified) translation by mikage (#2657)
- Dynamic changes of language and locale at runtime (#2659, #2677)
- Romanian translation (by @SINAPSA-IC in #2662)
- Chinese (Traditional) translation (by @supersonictw in #2661)
- Italian translation (by @Harvester62 in #2700)
Changed
- Customize combo boxes and context menus to fit the new style (#2535)
- Improve view bar scaling and Model Settings layout (#2520)
- Make the logo spin while the model is generating (#2557)
- Server: Reply to wrong GET/POST method with HTTP 405 instead of 404 (by @cosmic-snow in #2615)
- Update theme for menus (by @3Simplex in #2578)
- Move the "stop" button to the message box (#2561)
- Build with CUDA 11.8 for better compatibility (#2639)
- Make links in latest news section clickable (#2643)
- Support translation of settings choices (#2667, #2690)
- Improve LocalDocs view's error message (by @cosmic-snow in #2679)
- Ignore case of LocalDocs file extensions (#2642, #2684)
- Update llama.cpp to commit 87e397d00 from July 19th (#2694, #2702)
  - Add support for GPT-NeoX, Gemma 2, OpenELM, ChatGLM, and Jais architectures (all with Vulkan support)
  - Add support for DeepSeek-V2 architecture (no Vulkan support)
  - Enable Vulkan support for StarCoder2, XVERSE, Command R, and OLMo
- Show scrollbar in chat collections list as needed (by @cosmic-snow in #2691)
Fixed
- Fix placement of thumbs-down and datalake opt-in dialogs (#2540)
- Select the correct folder with the Linux fallback folder dialog (#2541)
- Fix clone button sometimes producing blank model info (#2545)
- Fix jerky chat view scrolling (#2555)
- Fix "reload" showing for chats with missing models (#2520)
- Fix property binding loop warning (#2601)
- Fix UI hang with certain chat view content (#2543)
- Fix crash when Kompute falls back to CPU (#2640)
- Fix several Vulkan resource management issues (#2694)
- Fix crash/hang when some models stop generating, by showing special tokens (#2701)