@themaximalist/llm.js

    Interface GoogleOptions

    interface GoogleOptions {
        apiKey?: string;
        attachments?: Attachment[];
        baseUrl?: string;
        contents?: GoogleMessage[];
        extended?: boolean;
        generationConfig?: {
            maxOutputTokens?: number;
            temperature?: number;
            thinkingConfig?: { includeThoughts: boolean };
        };
        json?: boolean;
        max_thinking_tokens?: number;
        max_tokens?: number;
        messages?: Message[];
        model?: string;
        parser?: ParserResponse;
        qualityFilter?: QualityFilter;
        service?: string;
        stream?: boolean;
        system_instruction?: { parts: { text: string }[] };
        temperature?: number;
        think?: boolean;
        tools?: Tool[] | WrappedTool[] | OpenAITool[];
    }
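For illustration, a minimal options object conforming to this interface might look like the following. This is a sketch only: the interface is re-declared locally (trimmed to the fields used) so the snippet stands alone, and the model name and API key are placeholders, not values taken from llm.js.

```typescript
// Trimmed local copy of GoogleOptions so the example compiles on its own;
// in real use the type comes from @themaximalist/llm.js.
interface GoogleOptions {
  service?: string;
  apiKey?: string;
  model?: string;
  temperature?: number;
  max_tokens?: number;
  think?: boolean;
  max_thinking_tokens?: number;
  stream?: boolean;
  json?: boolean;
}

const options: GoogleOptions = {
  service: "google",
  apiKey: "YOUR_API_KEY",   // placeholder; Usage.local services can omit this
  model: "gemini-pro",      // placeholder model name
  temperature: 0.7,
  max_tokens: 1024,
  think: true,              // enables thinking mode
  max_thinking_tokens: 512, // cap on tokens spent thinking
};
```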


    Properties

    apiKey?: string

    API key for the service; Usage.local services do not need an API key

    attachments?: Attachment[]

    Attachments to send to the model

    baseUrl?: string

    Base URL for the service

    contents?: GoogleMessage[]

    Messages in the Google API's native contents format
    extended?: boolean

    Returns an extended response with Response, PartialStreamResponse and StreamResponse types

    generationConfig?: {
        maxOutputTokens?: number;
        temperature?: number;
        thinkingConfig?: { includeThoughts: boolean };
    }
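The generationConfig property mirrors the shape shown above. A sketch with illustrative values (the type is re-declared locally so the snippet stands alone):

```typescript
// Local re-declaration of the generationConfig shape from the listing above;
// the concrete values are illustrative, not defaults from llm.js.
interface GenerationConfig {
  maxOutputTokens?: number;
  temperature?: number;
  thinkingConfig?: { includeThoughts: boolean };
}

const generationConfig: GenerationConfig = {
  maxOutputTokens: 2048,
  temperature: 0.2,
  thinkingConfig: { includeThoughts: true }, // surface the model's thoughts
};
```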
    json?: boolean

    Enables JSON mode in the LLM if available and parses the output with parsers.json

    max_thinking_tokens?: number

    Maximum number of tokens to use when thinking is enabled

    max_tokens?: number

    Maximum number of tokens to generate

    messages?: Message[]

    Messages to send to the model

    model?: string

    Model to use; defaults to Ollama.DEFAULT_MODEL

    parser?: ParserResponse

    Custom parser function; defaults include parsers.json, parsers.xml, parsers.codeBlock and parsers.markdown

    qualityFilter?: QualityFilter

    Quality filter when dealing with model usage

    service?: string

    Service to use; defaults to Ollama

    stream?: boolean

    Enables streaming mode

    system_instruction?: { parts: { text: string }[] }

    System prompt in the Google API's native format
    temperature?: number

    Temperature for the model

    think?: boolean

    Enables thinking mode

    tools?: Tool[] | WrappedTool[] | OpenAITool[]

    Tools available for the model to use; providing tools enables Options.extended
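
One of the accepted shapes is an OpenAI-style function tool. A sketch under that assumption: the OpenAITool declaration and the get_weather schema below are illustrative stand-ins, not copied from llm.js.

```typescript
// Local sketch of an OpenAI-style function-tool shape (one of the accepted
// tool forms); the name, description, and JSON schema are illustrative.
interface OpenAITool {
  type: "function";
  function: {
    name: string;
    description?: string;
    parameters?: Record<string, unknown>;
  };
}

const getWeather: OpenAITool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Look up current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
};

// Passing tools in the options object (shape from the listing above).
const toolOptions = { tools: [getWeather] };
```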