
API Reference

Complete documentation for AIKit

Constructor

new AIKit(config)

Creates a new AIKit instance with the specified configuration.

Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| provider | string | Yes* | AI provider name: 'openai', 'claude', or 'gemini' |
| apiKey | string | Yes* | API key for the provider |
| providers | array | No | Array of provider configurations for multi-provider setup |
| enableCache | boolean | No | Enable LocalStorage caching (default: true) |
| enableCostTracking | boolean | No | Enable cost tracking (default: true) |
| maxRetries | number | No | Maximum retry attempts (default: 3) |
| timeout | number | No | Request timeout in milliseconds (default: 30000) |
| autoFallback | boolean | No | Auto-switch to the next provider on failure (default: false) |

* Required unless using providers array

Example

// Single provider
const ai = new AIKit({
    provider: 'openai',
    apiKey: 'sk-...',
    enableCache: true,
    maxRetries: 3
});

// Multiple providers with auto-fallback
const multiAi = new AIKit({
    providers: [
        { name: 'openai', apiKey: 'key1', priority: 1 },
        { name: 'claude', apiKey: 'key2', priority: 2 }
    ],
    autoFallback: true
});
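When autoFallback is enabled, AIKit tries providers in priority order until one succeeds. The internal logic is not part of the public API, but the behavior can be sketched roughly as follows (tryProviders and the provider shape here are illustrative, not actual AIKit internals):

```javascript
// Illustrative sketch of priority-ordered fallback (not AIKit's actual internals).
// Providers are tried in ascending priority; the first success wins.
async function tryProviders(providers, sendRequest) {
    const ordered = [...providers].sort((a, b) => a.priority - b.priority);
    let lastError;
    for (const provider of ordered) {
        try {
            return await sendRequest(provider);
        } catch (err) {
            lastError = err; // remember the failure, fall through to the next provider
        }
    }
    throw new Error(`All attempts failed. Last error: ${lastError.message}`);
}
```

Sorting a copy (rather than mutating the caller's array) keeps the helper side-effect free.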

Methods

async chat(message, options)

Send a chat message to the AI and get a response.

Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| message | string | Yes | User message to send |
| options | object | No | Request options (see below) |

Options

| Option | Type | Description |
|--------|------|-------------|
| model | string | Model to use (e.g., 'gpt-4', 'claude-3-opus') |
| temperature | number | Creativity level (0.0-1.0) |
| maxTokens | number | Maximum response length |
| systemMessage | string | System prompt |
| history | array | Conversation history |
| validate | object | Response validation rules |
| skipCache | boolean | Bypass cache (default: false) |

Returns

A Promise that resolves to a response object:

{
    text: string,           // Response text
    usage: {
        input: number,      // Input tokens
        output: number,     // Output tokens
        total: number       // Total tokens
    },
    model: string,          // Model used
    finishReason: string,   // Completion reason
    fromCache: boolean,     // Whether from cache
    timestamp: string       // ISO timestamp
}

Example

const response = await ai.chat('Explain quantum computing', {
    model: 'gpt-4o-mini',
    temperature: 0.7,
    maxTokens: 500,
    validate: {
        minLength: 100,
        mustInclude: ['quantum']
    }
});

console.log(response.text);
console.log(`Tokens used: ${response.usage.total}`);

getStats()

Get usage statistics.

Returns

{
    totalRequests: number,
    successfulRequests: number,
    failedRequests: number,
    cachedResponses: number
}

Example

const stats = ai.getStats();
console.log(`Success rate: ${(stats.successfulRequests / stats.totalRequests * 100).toFixed(1)}%`);

getCostReport()

Get detailed cost report.

Returns

{
    total: string,              // Total cost as a numeric string
    totalUSD: string,           // Total cost formatted as USD
    byProvider: object,         // Cost breakdown by provider
    byModel: object,            // Cost breakdown by model
    totalRequests: number,      // Total number of requests
    averageCostPerRequest: string,
    dailyAverage: string,
    startDate: string,
    lastRequest: string
}

Example

const report = ai.getCostReport();
console.log(`Total spent: ${report.totalUSD}`);
console.log(`Average per request: ${report.averageCostPerRequest}`);

clearCache()

Clear all cached responses.

Returns

boolean - true if cache was cleared, false if caching is disabled

Example

ai.clearCache();
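The library keys cached responses by request content; the exact scheme is undocumented. A minimal stand-in for the LocalStorage cache (using a Map so it runs anywhere, with a hypothetical cacheKey helper) might behave like:

```javascript
// Minimal stand-in for AIKit's LocalStorage cache (illustrative only).
// A real browser build would use localStorage; a Map keeps this runnable anywhere.
const cache = new Map();
const cachingEnabled = true;

// Derive a stable key from the message and the options that affect the response.
function cacheKey(message, options = {}) {
    return JSON.stringify({ message, model: options.model, temperature: options.temperature });
}

// Mirrors the documented contract: true if cleared, false if caching is disabled.
function clearCache() {
    if (!cachingEnabled) return false;
    cache.clear();
    return true;
}
```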

updateConfig(newConfig)

Update configuration options.

Parameters

| Name | Type | Description |
|------|------|-------------|
| newConfig | object | Configuration options to update |

Example

ai.updateConfig({
    provider: 'claude',
    maxRetries: 5
});
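updateConfig most likely performs a shallow merge over the existing configuration, so keys you don't pass are preserved. A sketch of that assumed behavior (not the actual implementation):

```javascript
// Illustrative shallow merge, as updateConfig presumably behaves:
// later keys win, unspecified keys are preserved.
function mergeConfig(current, newConfig) {
    return { ...current, ...newConfig };
}
```

If a nested option needed partial updates, a shallow merge would replace the whole nested object; pass complete values for any nested keys.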

Configuration Options

Chat Options

Available options for the chat() method:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| model | string | Provider default | Specific model to use (e.g., 'gpt-4o-mini', 'claude-3-5-sonnet-20241022', 'gemini-1.5-flash') |
| temperature | number | 0.7 | Response creativity (0.0-1.0) |
| maxTokens | number | - | Maximum response length |
| topP | number | - | Nucleus sampling parameter |
| topK | number | - | Top-K sampling parameter |
| systemMessage | string | - | System prompt/instructions |
| history | array | [] | Previous conversation messages |
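The entry format for history is not spelled out here. Assuming the { role, content } message shape the underlying provider APIs broadly share, a multi-turn call might look like:

```javascript
// Assumed history format: an array of { role, content } messages.
// This shape is an assumption, not confirmed by the AIKit docs.
const history = [
    { role: 'user', content: 'What is a closure?' },
    { role: 'assistant', content: 'A closure is a function that captures its lexical scope.' }
];

// A follow-up that relies on the earlier turns would then pass it as:
// const response = await ai.chat('Can you give an example?', { history });
```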

Validation Rules

Response validation options for the validate parameter:

| Rule | Type | Description |
|------|------|-------------|
| minLength | number | Minimum response length in characters |
| maxLength | number | Maximum response length in characters |
| mustInclude | array | Keywords that must appear in response |
| mustNotInclude | array | Keywords that must not appear |
| format | string | 'json', 'email', 'url', 'number', 'markdown' |
| language | string | 'korean', 'english', 'japanese', 'chinese' |
| regex | string | Custom regex pattern |

Example

const response = await ai.chat('Write a JSON response', {
    validate: {
        format: 'json',
        minLength: 50,
        mustInclude: ['name', 'value']
    }
});
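The rules above can be sketched as a standalone checker. This is an illustrative re-implementation covering a subset of the rules, not AIKit's own validator:

```javascript
// Illustrative validator covering a subset of AIKit's rules (not the library's code).
function validateResponse(text, rules = {}) {
    const errors = [];
    if (rules.minLength && text.length < rules.minLength) {
        errors.push(`shorter than ${rules.minLength} characters`);
    }
    if (rules.mustInclude) {
        for (const keyword of rules.mustInclude) {
            if (!text.includes(keyword)) errors.push(`missing keyword "${keyword}"`);
        }
    }
    if (rules.format === 'json') {
        try { JSON.parse(text); } catch { errors.push('not valid JSON'); }
    }
    if (rules.regex && !new RegExp(rules.regex).test(text)) {
        errors.push('regex did not match');
    }
    return { valid: errors.length === 0, errors };
}
```

When validation fails, AIKit surfaces it as a "Validation failed: ..." error (see Error Handling).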

Provider-Specific Options

OpenAI

{
    model: 'gpt-4o-mini',              // or 'gpt-3.5-turbo'
    temperature: 0.7,
    topP: 1,
    frequencyPenalty: 0,
    presencePenalty: 0
}

Claude (Anthropic)

{
    model: 'claude-3-5-sonnet-20241022',  // sonnet, haiku, opus
    temperature: 1.0,
    topP: 1,
    topK: 40
}

Gemini (Google)

{
    model: 'gemini-1.5-flash',
    temperature: 0.9,
    topK: 40,
    topP: 1,
    safetySettings: [...]
}
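Behind its unified options, AIKit presumably translates each request into the provider's native payload shape. A hedged sketch of that mapping (the payload field names follow the public provider APIs, but the function itself is illustrative, not AIKit internals):

```javascript
// Illustrative mapping from AIKit's unified options to provider payload fields.
function toProviderPayload(provider, options) {
    const base = { model: options.model, temperature: options.temperature };
    switch (provider) {
        case 'openai':
            return { ...base, max_tokens: options.maxTokens, top_p: options.topP };
        case 'claude':
            return { ...base, max_tokens: options.maxTokens, top_p: options.topP, top_k: options.topK };
        case 'gemini':
            // Gemini nests sampling parameters under generationConfig.
            return {
                model: options.model,
                generationConfig: {
                    temperature: options.temperature,
                    maxOutputTokens: options.maxTokens,
                    topP: options.topP,
                    topK: options.topK
                }
            };
        default:
            throw new Error(`Unknown provider: ${provider}`);
    }
}
```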

Error Handling

AIKit throws errors with descriptive messages. Always use try-catch:

try {
    const response = await ai.chat('Hello');
    console.log(response.text);
} catch (error) {
    console.error('AI request failed:', error.message);
    
    // Common errors:
    // - "API key is required"
    // - "Network error: Unable to reach API endpoint"
    // - "Validation failed: ..."
    // - "All attempts failed. Last error: ..."
}
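The maxRetries option implies retry logic behind the scenes. A sketch of retry with exponential backoff that produces the "All attempts failed" message shown above (illustrative, not AIKit's actual implementation):

```javascript
// Illustrative retry loop with exponential backoff, matching the
// "All attempts failed. Last error: ..." message the library reports.
async function withRetries(fn, maxRetries = 3, baseDelayMs = 500) {
    let lastError;
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
        try {
            return await fn();
        } catch (err) {
            lastError = err;
            if (attempt < maxRetries) {
                // Wait 500ms, 1s, 2s, ... before the next attempt.
                await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
            }
        }
    }
    throw new Error(`All attempts failed. Last error: ${lastError.message}`);
}
```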

TypeScript Support

Type definitions are included. For TypeScript projects:

import AIKit from '@lukaplayground/aikit';

const ai: AIKit = new AIKit({
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY!
});

Next Steps