@themaximalist/llm.js

    Interface OllamaOptions

    interface OllamaOptions {
        apiKey?: string;
        attachments?: Attachment[];
        baseUrl?: string;
        extended?: boolean;
        json?: boolean;
        max_thinking_tokens?: number;
        max_tokens?: number;
        messages?: Message[];
        model?: string;
        options?: { num_predict?: number };
        parser?: ParserResponse;
        qualityFilter?: QualityFilter;
        service?: string;
        stream?: boolean;
        temperature?: number;
        think?: boolean;
        tools?: Tool[] | WrappedTool[] | OpenAITool[];
    }
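    For orientation, a minimal usage sketch. It assumes the package's
    default export is callable as LLM(prompt, options), the usual llm.js
    entry point; the model name is illustrative.

    import LLM from "@themaximalist/llm.js";

    // Sketch: pass OllamaOptions as the second argument. No apiKey is
    // needed for a local Ollama service.
    const response = await LLM("Why is the sky blue?", {
        service: "ollama",
        baseUrl: "http://localhost:11434", // Ollama's default address
        model: "llama3.2",                 // illustrative model name
        temperature: 0.7,
        max_tokens: 256,
    });

    console.log(response);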

    Properties

    apiKey?: string

    API key for the service. Usage.local services do not need an API key

    attachments?: Attachment[]

    Attachments to send to the model

    baseUrl?: string

    Base URL for the service

    extended?: boolean

    Returns an extended response with Response, PartialStreamResponse and StreamResponse types

    json?: boolean

    Enables JSON mode in LLM if available and parses output with parsers.json
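
    A sketch of JSON mode under the same assumed call shape; the prompt
    and model are illustrative.

    // json: true requests JSON output from the model and parses it with
    // parsers.json, so the result is already-parsed data.
    const colors = await LLM("List three primary colors as a JSON array", {
        service: "ollama",
        model: "llama3.2",
        json: true,
    });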

    max_thinking_tokens?: number

    Maximum number of tokens to use when thinking is enabled

    max_tokens?: number

    Maximum number of tokens to generate

    messages?: Message[]

    Messages to send to the model

    model?: string

    Model to use, defaults to Ollama.DEFAULT_MODEL

    options?: { num_predict?: number }

    Ollama-specific request options passed through to the API; num_predict caps the number of tokens to generate

    parser?: ParserResponse

    Custom parser function, defaults include parsers.json, parsers.xml, parsers.codeBlock and parsers.markdown

    qualityFilter?: QualityFilter

    Quality filter applied when evaluating model usage

    service?: string

    Service to use, defaults to Ollama

    stream?: boolean

    Enables streaming mode
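
    A streaming sketch, assuming (not guaranteed by this page) that the
    result is an async iterable of text chunks when stream is set.

    const stream = await LLM("Tell me a short story", {
        service: "ollama",
        model: "llama3.2",
        stream: true,
    });

    // Consume chunks as they arrive.
    for await (const chunk of stream) {
        process.stdout.write(chunk);
    }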

    temperature?: number

    Temperature for the model

    think?: boolean

    Enables thinking mode

    tools?: Tool[] | WrappedTool[] | OpenAITool[]

    Tools available for the model to use; setting this enables Options.extended
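
    A tool-calling sketch. The get_weather tool below is hypothetical and
    uses the OpenAI-style function schema implied by the OpenAITool
    variant; passing tools switches the call into extended mode.

    const extended = await LLM("What is the weather in Paris?", {
        service: "ollama",
        model: "llama3.2",
        tools: [{
            type: "function",
            function: {
                name: "get_weather", // hypothetical tool
                description: "Get the current weather for a city",
                parameters: {
                    type: "object",
                    properties: { city: { type: "string" } },
                    required: ["city"],
                },
            },
        }],
    });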