- `apiKey` (optional): API key for the service; local services do not need an API key.
- `attachments` (optional): Attachments to send to the model.
- `baseUrl` (optional): Base URL for the service.
- `extended` (optional): Returns an extended response with the `Response`, `PartialStreamResponse`, and `StreamResponse` types.
- `json` (optional): Enables JSON mode in the LLM if available and parses output with `parsers.json`.
- `max_thinking_tokens` (optional): Maximum number of tokens to use when thinking is enabled.
- `max_tokens` (optional): Maximum number of tokens to generate.
- `messages` (optional): Messages to send to the model.
- `model` (optional): Model to use; defaults to `Ollama.DEFAULT_MODEL`.
- `options` (optional)
- `parser` (optional): Custom parser function; defaults include `parsers.json`, `parsers.xml`, `parsers.codeBlock`, and `parsers.markdown`.
- `quality` (optional): Quality filter applied when dealing with model usage.
- `service` (optional): Service to use; defaults to Ollama.
- `stream` (optional): Enables streaming mode.
- `temperature` (optional): Temperature for the model.
- `think` (optional): Enables thinking mode.
- `tools` (optional): Tools available for the model to use; enables `Options.extended`.
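A minimal sketch of how several of these options might be combined into a single options object. The property names follow the list above; the concrete values (service name, model name, token limits) are illustrative assumptions, not defaults of the library:

```javascript
// Hypothetical options object combining several of the fields above.
// The real Options type comes from the surrounding library; this only
// illustrates the shapes the descriptions imply.
const options = {
  service: "ollama",          // assumed service name; defaults to Ollama when omitted
  model: "llama3.2",          // assumed model; defaults to Ollama.DEFAULT_MODEL when omitted
  temperature: 0.7,           // sampling temperature for the model
  max_tokens: 1024,           // cap on generated tokens
  stream: true,               // enables streaming mode
  think: true,                // enables thinking mode
  max_thinking_tokens: 512,   // cap on thinking tokens, used only while think is true
  messages: [
    { role: "user", content: "Hello!" }, // messages sent to the model
  ],
};

console.log(options.max_tokens); // 1024
```

Note that per the list above, supplying `tools` would also switch the call into extended-response mode, as if `extended` had been set.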