Documentation

Everything you need to integrate, monitor, and optimize your LLM applications with TknScope.

Get started in 5 minutes

Install TknScope, wrap your LLM client, and start monitoring immediately. No configuration is required.

  1. Install the package: via npm or pip
  2. Wrap your LLM client: one line of code
  3. View your dashboard: real-time insights

Read the quick start
Installation

```shell
# JavaScript / TypeScript
npm install tknscope

# Python
pip install tknscope
```
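The integration examples below read credentials from environment variables. One way to set them before running the code (the values here are placeholders; the variable names match those used in the examples):

```shell
# Placeholder values - substitute your real keys.
export TKNSCOPE_API_KEY="your-tknscope-key"
export OPENAI_API_KEY="your-openai-key"
```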
Integration Example

JavaScript:

```javascript
import { TknScope } from 'tknscope';
import OpenAI from 'openai';

const tknscope = new TknScope({
  apiKey: process.env.TKNSCOPE_API_KEY,
});

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Wrap your client with TknScope monitoring
const client = tknscope.wrap(openai);

// Use as normal - all requests are automatically tracked
const completion = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is token monitoring?' }
  ],
});

console.log(completion.choices[0].message.content);
```
Python:

```python
import os
from tknscope import TknScope
from openai import OpenAI

tknscope = TknScope(
    api_key=os.environ.get("TKNSCOPE_API_KEY")
)

openai = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY")
)

# Wrap your client with TknScope monitoring
client = tknscope.wrap(openai)

# Use as normal - all requests are automatically tracked
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is token monitoring?"}
    ]
)

print(completion.choices[0].message.content)
```
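For intuition about how a wrapper like `tknscope.wrap` can track every request without changing your call sites, consider a proxy that forwards attribute access to the underlying client and records token usage from each response. The sketch below is purely illustrative, not TknScope's actual implementation: `UsageRecorder` and `MonitoredClient` are hypothetical names, and it assumes dict-like responses carrying an OpenAI-style `usage` field.

```python
# Illustrative sketch only: a minimal proxy-style wrapper in the spirit of
# tknscope.wrap(). UsageRecorder and MonitoredClient are hypothetical names,
# not TknScope APIs.

class UsageRecorder:
    """Collects one usage event per tracked request."""

    def __init__(self):
        self.events = []

    def record(self, model, usage):
        # Assumes OpenAI-style usage counters (prompt/completion tokens).
        self.events.append({
            "model": model,
            "prompt_tokens": usage["prompt_tokens"],
            "completion_tokens": usage["completion_tokens"],
        })


class MonitoredClient:
    """Proxy that forwards attribute access to the wrapped client.

    Callables are wrapped so the response can be inspected after the call;
    non-callable attributes (nested namespaces like client.chat.completions)
    are themselves wrapped, so deeply nested calls are still intercepted.
    """

    def __init__(self, inner, recorder):
        self._inner = inner
        self._recorder = recorder

    def __getattr__(self, name):
        attr = getattr(self._inner, name)
        if callable(attr):
            def wrapped(*args, **kwargs):
                response = attr(*args, **kwargs)
                # Assumes a dict-like response; real SDKs return objects,
                # so a production wrapper would use attribute access instead.
                usage = response.get("usage")
                if usage is not None:
                    self._recorder.record(response.get("model"), usage)
                return response
            return wrapped
        return MonitoredClient(attr, self._recorder)
```

The point of the proxy pattern is that the wrapped client keeps the exact surface of the original one, which is why the examples above can call `client.chat.completions.create(...)` unchanged after wrapping.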

Need help?

Can't find what you're looking for? Our support team is here to help.