docs: update tokenizer notice in llms/getting_started (#2641)

A tiny update to the docs, spotted here:
https://github.com/hwchase17/langchain/issues/2439
Nikita Zavgorodnii 2023-04-11 04:55:45 +01:00 committed by GitHub
parent 9d20fd5135
commit 1c979e320d


@@ -186,7 +186,7 @@
"source": [
"**Number of Tokens:** You can also estimate how many tokens a piece of text will be in that model. This is useful because models have a context length (and cost more for more tokens), which means you need to be aware of how long the text you are passing in is.\n",
"\n",
"Notice that by default the tokens are estimated using a HuggingFace tokenizer."
"Notice that by default the tokens are estimated using [tiktoken](https://github.com/openai/tiktoken) (except for legacy version <3.8, where a HuggingFace tokenizer is used)"
]
},
{
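
For reference, a minimal sketch of the tiktoken-based estimation the updated notice describes, assuming the `tiktoken` package is installed; the model name and sample text below are illustrative, not taken from the notebook.

```python
# Minimal sketch of token estimation with tiktoken, as referenced in the
# updated notice. Assumes `pip install tiktoken`; model/text are examples.
import tiktoken

text = "Tell me a joke."

# Look up the encoding for a specific OpenAI model; fall back to a named
# encoding if the model is unknown to the installed tiktoken version.
try:
    enc = tiktoken.encoding_for_model("text-davinci-003")
except KeyError:
    enc = tiktoken.get_encoding("gpt2")

num_tokens = len(enc.encode(text))
print(f"Estimated tokens: {num_tokens}")
```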