Commit Graph

3 Commits

William FH
632c65d64b
Add to notebook to assist in ground truth question generation (#2523)
Extends the bottom of the notebook to show how to generate example
test cases with the assistance of an LLM.
2023-04-06 23:08:55 -07:00
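
A minimal sketch of what LLM-assisted test-case generation can look like, assuming the 2023-era `langchain` API (`OpenAI`, `PromptTemplate`, `LLMChain`); the prompt text and variable names are illustrative, not the notebook's exact code:

```python
# Sketch only: uses the 2023-era langchain interfaces; the prompt wording
# and the `api_description` value are assumptions for illustration.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["api_description"],
    template=(
        "Given the following API description, write 5 realistic user "
        "questions that the API could answer, one per line:\n\n"
        "{api_description}"
    ),
)

chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

# One generated question per line becomes a candidate ground-truth query.
questions = chain.run(api_description="Klarna's product search API").splitlines()
print(questions)
```
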
William FH
629fda3957
Use JSON rather than JSON5 (#2520)
Evaluation so far has shown that agents do a reasonable job of emitting
`json` blocks as arguments when cued (instead of TypeScript), and `json`
parsing supports the `strict=False` flag to allow control characters,
which are likely to appear in the response.

This PR makes this change to the request and response synthesizer
chains, and fixes the temperature of the OpenAI agent in the eval
notebook. It also adds a `raise_error = False` flag in the notebook to
facilitate debugging.
2023-04-06 21:14:12 -07:00
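
For context on the `strict=False` behavior mentioned above, Python's standard-library `json.loads` rejects raw control characters inside strings by default but accepts them when `strict=False`; the response text below is a made-up example:

```python
import json

# A model response with a literal newline embedded in a JSON string value.
raw = '{"answer": "line one\nline two"}'

# Default strict parsing rejects unescaped control characters.
try:
    json.loads(raw)
except json.JSONDecodeError as err:
    print("strict parse failed:", err)

# strict=False tolerates control characters inside string values.
data = json.loads(raw, strict=False)
print(data["answer"])
```
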
William FH
f8e4048cd8
Add an Example Evaluation Notebook for the API Chain (#2516)
Taking the Klarna API as an example, uses evaluation chains to judge
the quality of the request and response synthesizers based on a small
set of curated queries.

Also updates the chain's intermediate steps to emit a dict so each step
can be keyed for lookup.


![image](https://user-images.githubusercontent.com/13333726/230505771-5cdb4de4-6fe7-4f54-b944-f29d438fa42c.png)
2023-04-06 15:58:41 -07:00
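
A hypothetical illustration of the keyed intermediate steps described above; the key names and values are assumptions, not the chain's actual output:

```python
# Assumed shape: intermediate steps emitted as a dict rather than an
# ordered list, so each stage is stored under a name and an evaluation
# chain can grade the request and response synthesizers independently.
intermediate_steps = {
    "request_args": '{"q": "red sneakers", "max_price": 100}',
    "response": '{"products": ["..."]}',
}

# Look up each stage directly by key when judging its quality.
request_to_grade = intermediate_steps["request_args"]
response_to_grade = intermediate_steps["response"]
```
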