I'm looking to train an LLM on the schemas that a private API ingests and emits, and I'm having a hard time getting GPT-4 to do it without inventing a bunch of make-believe fields, even with low temperature and high top-p. I've started looking at using a task-specific LLM for the code generation instead, but I'm curious whether anybody has gone down this path before.
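Right now I'm working around the hallucinations by validating each response against the schema and re-prompting on failure. A minimal sketch of that check (the field names and payload here are hypothetical placeholders, not from the real API):

```python
import json

# Hypothetical allowed fields -- stand-ins for the private API's real schema.
ALLOWED_FIELDS = {"user_id", "amount", "currency"}

def find_hallucinated_fields(llm_output: str) -> set:
    """Return any top-level keys the model invented that the schema doesn't accept."""
    payload = json.loads(llm_output)
    return set(payload) - ALLOWED_FIELDS

# A response with a made-up "memo" field gets flagged, and I re-prompt with the error.
bad = '{"user_id": 1, "amount": 9.99, "currency": "USD", "memo": "thanks!"}'
print(find_hallucinated_fields(bad))  # → {'memo'}
```

This catches the invented fields after the fact, but it doesn't stop the model from producing them in the first place, which is why I'm wondering about a task-specific model.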