Complete documentation for AIKit
Creates a new AIKit instance with the specified configuration.
| Name | Type | Required | Description |
|---|---|---|---|
| provider | string | Yes* | AI provider name: 'openai', 'claude', or 'gemini' |
| apiKey | string | Yes* | API key for the provider |
| providers | array | No | Array of provider configurations for multi-provider setup |
| enableCache | boolean | No | Enable LocalStorage caching (default: true) |
| enableCostTracking | boolean | No | Enable cost tracking (default: true) |
| maxRetries | number | No | Maximum retry attempts (default: 3) |
| timeout | number | No | Request timeout in milliseconds (default: 30000) |
| autoFallback | boolean | No | Auto-switch to the next provider on failure (default: false) |

\* Required unless using the `providers` array.
```javascript
// Single provider
const ai = new AIKit({
  provider: 'openai',
  apiKey: 'sk-...',
  enableCache: true,
  maxRetries: 3
});
```

```javascript
// Multiple providers with auto-fallback
const ai = new AIKit({
  providers: [
    { name: 'openai', apiKey: 'key1', priority: 1 },
    { name: 'claude', apiKey: 'key2', priority: 2 }
  ],
  autoFallback: true
});
```
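With `autoFallback` enabled, requests presumably fall through to the next provider in ascending `priority` order. A minimal sketch of that selection logic (the helper names here are illustrative, not part of the AIKit API):

```javascript
// Illustrative sketch: try providers in ascending priority order,
// falling through to the next one when a request fails.
function orderByPriority(providers) {
  return [...providers].sort((a, b) => a.priority - b.priority);
}

async function chatWithFallback(providers, sendFn, message) {
  let lastError;
  for (const provider of orderByPriority(providers)) {
    try {
      return await sendFn(provider, message); // first success wins
    } catch (err) {
      lastError = err; // remember the failure, try the next provider
    }
  }
  throw new Error(`All attempts failed. Last error: ${lastError.message}`);
}
```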
Send a chat message to the AI and get a response.
| Name | Type | Required | Description |
|---|---|---|---|
| message | string | Yes | User message to send |
| options | object | No | Request options (see below) |

| Option | Type | Description |
|---|---|---|
| model | string | Model to use (e.g., 'gpt-4', 'claude-3-opus') |
| temperature | number | Creativity level (0.0-1.0) |
| maxTokens | number | Maximum response length |
| systemMessage | string | System prompt |
| history | array | Conversation history |
| validate | object | Response validation rules |
| skipCache | boolean | Bypass cache (default: false) |
Returns a promise that resolves to a response object:

```javascript
{
  text: string,          // Response text
  usage: {
    input: number,       // Input tokens
    output: number,      // Output tokens
    total: number        // Total tokens
  },
  model: string,         // Model used
  finishReason: string,  // Completion reason
  fromCache: boolean,    // Whether served from cache
  timestamp: string      // ISO timestamp
}
```
```javascript
const response = await ai.chat('Explain quantum computing', {
  model: 'gpt-4o-mini',
  temperature: 0.7,
  maxTokens: 500,
  validate: {
    minLength: 100,
    mustInclude: ['quantum']
  }
});

console.log(response.text);
console.log(`Tokens used: ${response.usage.total}`);
```
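Conceptually, cached responses are keyed on the message plus its options, and hits come back with `fromCache: true`. A Map-based sketch of that behaviour (illustrative only, not the library's actual LocalStorage implementation):

```javascript
// Illustrative cache sketch: key on message + options, mark hits with fromCache.
const cache = new Map();

function cacheKey(message, options = {}) {
  // Serialize message and options into a stable lookup key.
  return JSON.stringify({ message, options });
}

function getCached(message, options) {
  const hit = cache.get(cacheKey(message, options));
  return hit ? { ...hit, fromCache: true } : null;
}

function storeResponse(message, options, response) {
  cache.set(cacheKey(message, options), response);
}
```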
Get usage statistics.
```javascript
{
  totalRequests: number,
  successfulRequests: number,
  failedRequests: number,
  cachedResponses: number
}
```

```javascript
const stats = ai.getStats();
console.log(`Success rate: ${stats.successfulRequests / stats.totalRequests * 100}%`);
```
Get detailed cost report.
```javascript
{
  total: string,                 // Total cost as a number string
  totalUSD: string,              // Total cost formatted as USD
  byProvider: object,            // Cost breakdown by provider
  byModel: object,               // Cost breakdown by model
  totalRequests: number,         // Total number of requests
  averageCostPerRequest: string,
  dailyAverage: string,
  startDate: string,
  lastRequest: string
}
```

```javascript
const report = ai.getCostReport();
console.log(`Total spent: ${report.totalUSD}`);
console.log(`Average per request: ${report.averageCostPerRequest}`);
```
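The string fields can be parsed as plain numbers; `averageCostPerRequest`, for instance, is presumably just the total divided by the request count. A hypothetical sketch of that arithmetic (the six-decimal formatting is an assumption, not documented behaviour):

```javascript
// Hypothetical: derive the per-request average from the report totals.
function averageCost(total, totalRequests) {
  if (totalRequests === 0) return '0.000000'; // avoid division by zero
  return (Number(total) / totalRequests).toFixed(6);
}
```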
Clear all cached responses.
Returns a boolean: `true` if the cache was cleared, `false` if caching is disabled.

```javascript
ai.clearCache();
```
Update configuration options.
| Name | Type | Description |
|---|---|---|
| newConfig | object | Configuration options to update |

```javascript
ai.updateConfig({
  provider: 'claude',
  maxRetries: 5
});
```
Available options for the `chat()` method:

| Option | Type | Default | Description |
|---|---|---|---|
| model | string | Provider default | Specific model to use (e.g., 'gpt-4o-mini', 'claude-3-5-sonnet-20241022', 'gemini-1.5-flash') |
| temperature | number | 0.7 | Response creativity (0.0-1.0) |
| maxTokens | number | - | Maximum response length |
| topP | number | - | Nucleus sampling parameter |
| topK | number | - | Top-K sampling parameter |
| systemMessage | string | - | System prompt/instructions |
| history | array | [] | Previous conversation messages |
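The `systemMessage` and `history` options are presumably merged into a single message array before the request is sent to the provider. A sketch of that assembly (the `{ role, content }` message shape is an assumption, not the documented internal format):

```javascript
// Illustrative: combine systemMessage + history + the new user message
// into one ordered message array.
function buildMessages(systemMessage, history, userMessage) {
  const messages = [];
  if (systemMessage) {
    messages.push({ role: 'system', content: systemMessage });
  }
  messages.push(...history); // prior turns, oldest first
  messages.push({ role: 'user', content: userMessage });
  return messages;
}
```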
Response validation options for the `validate` parameter:

| Rule | Type | Description |
|---|---|---|
| minLength | number | Minimum response length in characters |
| maxLength | number | Maximum response length in characters |
| mustInclude | array | Keywords that must appear in the response |
| mustNotInclude | array | Keywords that must not appear |
| format | string | 'json', 'email', 'url', 'number', 'markdown' |
| language | string | 'korean', 'english', 'japanese', 'chinese' |
| regex | string | Custom regex pattern |
```javascript
const response = await ai.chat('Write a JSON response', {
  validate: {
    format: 'json',
    minLength: 50,
    mustInclude: ['name', 'value']
  }
});
```
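How these rules are evaluated is internal to AIKit, but a rough client-side sketch of the length, keyword, and regex checks (an illustrative helper, not part of the API; the `format` and `language` rules are omitted) looks like:

```javascript
// Rough sketch: collect an error string for every rule the text violates.
function validateResponse(text, rules = {}) {
  const errors = [];
  if (rules.minLength && text.length < rules.minLength) {
    errors.push(`shorter than ${rules.minLength} characters`);
  }
  if (rules.maxLength && text.length > rules.maxLength) {
    errors.push(`longer than ${rules.maxLength} characters`);
  }
  for (const kw of rules.mustInclude ?? []) {
    if (!text.includes(kw)) errors.push(`missing keyword "${kw}"`);
  }
  for (const kw of rules.mustNotInclude ?? []) {
    if (text.includes(kw)) errors.push(`forbidden keyword "${kw}"`);
  }
  if (rules.regex && !new RegExp(rules.regex).test(text)) {
    errors.push('regex did not match');
  }
  return errors; // empty array means the response passed
}
```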
```javascript
// OpenAI defaults
{
  model: 'gpt-4o-mini', // or 'gpt-3.5-turbo'
  temperature: 0.7,
  topP: 1,
  frequencyPenalty: 0,
  presencePenalty: 0
}
```

```javascript
// Claude defaults
{
  model: 'claude-3-5-sonnet-20241022', // sonnet, haiku, opus
  temperature: 1.0,
  topP: 1,
  topK: 40
}
```

```javascript
// Gemini defaults
{
  model: 'gemini-1.5-flash',
  temperature: 0.9,
  topK: 40,
  topP: 1,
  safetySettings: [...]
}
```
AIKit throws errors with descriptive messages. Always use try-catch:
```javascript
try {
  const response = await ai.chat('Hello');
  console.log(response.text);
} catch (error) {
  console.error('AI request failed:', error.message);
  // Common errors:
  // - "API key is required"
  // - "Network error: Unable to reach API endpoint"
  // - "Validation failed: ..."
  // - "All attempts failed. Last error: ..."
}
```
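The last error message above suggests requests are retried up to `maxRetries` times before giving up. A sketch of that behaviour (illustrative only, not AIKit's actual retry loop):

```javascript
// Illustrative: retry a failing async operation up to maxRetries times,
// surfacing the last error if every attempt fails.
async function withRetries(fn, maxRetries = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      lastError = err; // keep the most recent failure
    }
  }
  throw new Error(`All attempts failed. Last error: ${lastError.message}`);
}
```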
Type definitions are included. For TypeScript projects:
```typescript
import AIKit from '@lukaplayground/aikit';

const ai: AIKit = new AIKit({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY!
});
```