apiKey (Optional): API key for the service. Usage.local services do not need an API key.
attachments (Optional): Attachments to send to the model.
baseUrl (Optional): Base URL for the service.
extended (Optional): Returns an extended response with the Response, PartialStreamResponse and StreamResponse types.
input (Optional)
json (Optional): Enables JSON mode in the LLM if available and parses output with parsers.json.
max_thinking_tokens (Optional): Maximum number of tokens to use when thinking is enabled.
max_tokens (Optional): Maximum number of tokens to generate.
messages (Optional): Messages to send to the model.
model (Optional): Model to use; defaults to Ollama.DEFAULT_MODEL.
parser (Optional): Custom parser function; defaults include parsers.json, parsers.xml, parsers.codeBlock and parsers.markdown.
quality (Optional): Quality filter applied to model usage.
reasoning (Optional)
service (Optional): Service to use; defaults to Ollama.
stream (Optional): Enables streaming mode.
temperature (Optional): Temperature for the model.
think (Optional): Enables thinking mode.
tools (Optional): Tools available for the model to use; setting this will enable Options.extended.
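Taken together, these are plain fields on a single options object. The sketch below is reconstructed from this reference only: the interface shape, field types, and the model name are assumptions, not the library's actual type definitions.

```typescript
// Sketch of an options object built from the fields documented above.
// All types here are inferred from the descriptions, not from the library.
interface OptionsSketch {
  service?: string;                 // defaults to Ollama
  model?: string;                   // defaults to Ollama.DEFAULT_MODEL
  max_tokens?: number;              // maximum number of tokens to generate
  temperature?: number;             // temperature for the model
  stream?: boolean;                 // enables streaming mode
  think?: boolean;                  // enables thinking mode
  max_thinking_tokens?: number;     // token budget when thinking is enabled
  json?: boolean;                   // JSON mode, parsed with parsers.json
  extended?: boolean;               // request the extended response types
  messages?: { role: string; content: string }[]; // messages to send
}

const options: OptionsSketch = {
  service: "ollama",
  model: "llama3.2",                // placeholder model name
  max_tokens: 512,
  temperature: 0.7,
  stream: false,
  json: true,                       // enable JSON mode if the model supports it
};

console.log(options);
```

In use, an object like this would typically be passed alongside the prompt or messages when invoking the library; a local Ollama service needs no apiKey, while hosted services do.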