Ashr Labs TypeScript SDK
A TypeScript SDK for evaluating AI agents against Ashr Labs test datasets. Generate datasets, run your agent against test scenarios, compare expected and actual behavior, and submit results, all with no dependencies beyond Node.js built-ins.
Quick Links
- Testing Your Agent — start here (end-to-end guide with EvalRunner)
- VM Integration — browser/desktop agents with VM stream logging
- Installation
- Quick Start
- Authentication
- API Reference — EvalRunner, Agent, comparators, RunBuilder, client methods
- SDK Notes — platform advisories delivered to your SDK
- Error Handling
- Examples
Requirements
- Node.js 18 or higher
- TypeScript 5.4+ (recommended)
Installation
```bash
npm install ashr-labs
```
Quick Example
Any agent with respond() and reset() methods works out of the box:
```typescript
import { AshrLabsClient, EvalRunner } from "ashr-labs";

const client = new AshrLabsClient("tp_your_api_key_here");

// myAgent is any object exposing respond() and reset()
const runner = await EvalRunner.fromDataset(client, 322);
await runner.runAndDeploy(myAgent, client, 322);
```
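The examples above assume a `myAgent` object. A minimal sketch of such an agent is below; the `EchoAgent` class and its behavior are illustrative, not part of the SDK, and the exact interface `EvalRunner` expects is documented in the API Reference. The sketch assumes `respond()` takes a prompt string and returns a string, and `reset()` clears per-scenario state:

```typescript
// Hypothetical minimal agent: echoes the scenario prompt with a turn counter.
class EchoAgent {
  private turns = 0;

  async respond(prompt: string): Promise<string> {
    this.turns += 1;
    return `[turn ${this.turns}] ${prompt}`;
  }

  reset(): void {
    // Called between scenarios so state from one scenario
    // does not leak into the next.
    this.turns = 0;
  }
}
```

Because the runner only needs these two methods, you can wrap an existing agent (or an LLM API call) in a thin adapter class with this shape.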
Or with more control:
```typescript
import { AshrLabsClient, EvalRunner } from "ashr-labs";

const client = new AshrLabsClient("tp_your_api_key_here");

// Generate a dataset
const [datasetId, source] = await client.generateDataset(
  "My Agent Eval",
  { /* Your agent config */ },
);

// Run the eval with progress logging
const runner = new EvalRunner(source);
const run = await runner.run(myAgent, {
  onScenario: (sid, s) => console.log(`Running: ${s.title}`),
});

// Inspect metrics before submitting
const metrics = run.build().aggregate_metrics as Record<string, unknown>;
console.log(`Passed: ${metrics.tests_passed}/${metrics.total_tests}`);
console.log(`Avg similarity: ${metrics.average_similarity_score}`);

// Submit
await run.deploy(client, datasetId);
```
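Since `aggregate_metrics` is typed as `Record<string, unknown>`, it is worth narrowing fields before doing arithmetic on them. A small hedged helper is sketched below; `asNumber` is our name, and the field names mirror the example above rather than a guaranteed schema:

```typescript
// Narrow an unknown metrics field to a number, with a fallback.
function asNumber(value: unknown, fallback = 0): number {
  return typeof value === "number" && Number.isFinite(value) ? value : fallback;
}

// Illustrative metrics object with the same shape as the example above.
const metrics: Record<string, unknown> = {
  tests_passed: 9,
  total_tests: 10,
  average_similarity_score: 0.87,
};

const passed = asNumber(metrics.tests_passed);
const total = asNumber(metrics.total_tests, 1); // avoid division by zero
console.log(`Passed: ${passed}/${total} (${((passed / total) * 100).toFixed(0)}%)`);
```

This keeps template literals from silently printing `undefined` when a field is missing or renamed.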
Support
For issues and feature requests, please visit our GitHub repository.