langchain/docs/modules/llms
Charles Frye e9799d6821
improves huggingface_hub example (#988)
The provided example uses the default `max_length` of `20` tokens, which
causes the example generation to get cut off. 20 tokens is far too
short to show chain-of-thought (CoT) reasoning, so I boosted it to `64`.

For readers who don't know HF's API well, it can be hard to figure out
just where those `model_kwargs` end up, and `max_length` is a
particularly critical one.
2023-02-10 17:56:15 -08:00
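The change described above boils down to passing `max_length` through `model_kwargs`. A minimal sketch of the usage (the `repo_id` is illustrative, and the merge helper below is a simplified stand-in for how the wrapper forwards `model_kwargs` to the Inference API's generation parameters, not the library's actual code):

```python
# Hedged sketch of the HuggingFaceHub usage from the improved example:
#
#   from langchain import HuggingFaceHub
#   llm = HuggingFaceHub(
#       repo_id="google/flan-t5-xl",          # illustrative model choice
#       model_kwargs={"max_length": 64},      # default of 20 truncates CoT output
#   )
#
# Conceptually, model_kwargs are merged over the API's generation
# defaults, roughly like this (simplified, not the library's code):
HF_DEFAULTS = {"max_length": 20}  # the default that cut the example short


def generation_params(model_kwargs=None):
    """Merge user-supplied model_kwargs over the API defaults."""
    params = dict(HF_DEFAULTS)
    params.update(model_kwargs or {})
    return params


print(generation_params({"max_length": 64}))  # {'max_length': 64}
```

Calling the real wrapper additionally requires a `HUGGINGFACEHUB_API_TOKEN` environment variable; the sketch above only shows where `max_length` fits.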
examples Harrison/openai callback (#684) 2023-01-22 23:37:01 -08:00
integrations improves huggingface_hub example (#988) 2023-02-10 17:56:15 -08:00
async_llm.ipynb Add asyncio support for LLM (OpenAI), Chain (LLMChain, LLMMathChain), and Agent (#841) 2023-02-07 21:21:57 -08:00
generic_how_to.rst Harrison/openai callback (#684) 2023-01-22 23:37:01 -08:00
getting_started.ipynb Docs refactor (#480) 2023-01-02 08:24:09 -08:00
how_to_guides.rst Add asyncio support for LLM (OpenAI), Chain (LLMChain, LLMMathChain), and Agent (#841) 2023-02-07 21:21:57 -08:00
integrations.rst Feature: linkcheck-action (#534) (#542) 2023-01-04 21:39:50 -08:00
key_concepts.md Docs refactor (#480) 2023-01-02 08:24:09 -08:00