Tutorial · 3 min read

Integrating ziptoken with LangChain in 10 Lines

A step-by-step guide to wrapping ziptoken compression into a LangChain Runnable, so every chain automatically compresses prompts before calling the LLM.


ziptoken Developer Relations

LangChain's composable Runnable interface makes it trivial to insert a compression step anywhere in your chain. Here's how to do it in TypeScript.
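If you haven't used Runnables before, the core idea is plain async function composition: each step takes the previous step's output as its input. A dependency-free sketch of what pipe does conceptually (the names here are illustrative, not LangChain's API):

```typescript
// Conceptual sketch of pipe: compose two async transforms into one.
type Step<A, B> = (input: A) => Promise<B>

function pipe<A, B, C>(f: Step<A, B>, g: Step<B, C>): Step<A, C> {
  // Run f, then feed its result to g.
  return async (input) => g(await f(input))
}

// Two toy steps standing in for "compress" and "call the LLM".
const shout: Step<string, string> = async (s) => s.toUpperCase()
const exclaim: Step<string, string> = async (s) => s + '!'

const chain = pipe(shout, exclaim)
// await chain('hi') → 'HI!'
```

LangChain's RunnableLambda and .pipe() give you exactly this shape, plus batching, streaming, and tracing for free.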

Install dependencies

pnpm add @langchain/core @langchain/openai

Create the compressor runnable

import { RunnableLambda } from '@langchain/core/runnables'

// Wrap the ziptoken REST endpoint in a Runnable so it can be piped
// into any chain.
const compress = RunnableLambda.from(async (text: string) => {
  const res = await fetch('https://api.ziptoken.ai/api/v1/compress', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.ZIPTOKEN_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ text }),
  })
  if (!res.ok) throw new Error(`ziptoken API error: ${res.status}`)
  return (await res.json()).compressed as string
})
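Throwing on an API error fails the whole chain. If you would rather degrade gracefully, a common pattern is to fall back to the uncompressed text on any failure. A minimal sketch, assuming the same endpoint and response shape as above (compressWithFallback is an illustrative name, not part of any SDK):

```typescript
// Hedged sketch: call the compression API, but return the original
// text on any HTTP or network error so the chain keeps working,
// just without the token savings.
async function compressWithFallback(text: string): Promise<string> {
  try {
    const res = await fetch('https://api.ziptoken.ai/api/v1/compress', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.ZIPTOKEN_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ text }),
    })
    if (!res.ok) return text // fall back on HTTP errors
    const data = (await res.json()) as { compressed?: string }
    return data.compressed ?? text
  } catch {
    return text // fall back on network errors
  }
}
```

Swap this into the RunnableLambda above if availability matters more to you than guaranteed compression.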

Compose the chain

Here llm is any chat model (for example, new ChatOpenAI() from @langchain/openai) and parser any output parser (such as new StringOutputParser() from @langchain/core/output_parsers):

const chain = compress.pipe(llm).pipe(parser)
const result = await chain.invoke(userInput)

That's it. Every input flows through ziptoken before hitting your LLM; no other code changes are needed.
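One easy optimization: if the same text recurs across calls (a shared system prompt, a common document prefix), you can memoize compression results in-process and skip repeat API calls. A sketch, assuming compression is deterministic for a given input (the names memoize and cache are illustrative):

```typescript
// Hedged sketch: in-process memoization so identical inputs are
// only compressed once per process.
const cache = new Map<string, string>()

function memoize(
  fn: (text: string) => Promise<string>,
): (text: string) => Promise<string> {
  return async (text) => {
    const hit = cache.get(text)
    if (hit !== undefined) return hit // cache hit: skip the API call
    const out = await fn(text)
    cache.set(text, out)
    return out
  }
}

// Usage: wrap whatever function performs the API call, then lift it
// into a Runnable as before, e.g.
//   const compress = RunnableLambda.from(memoize(callZiptoken))
```

For long-running services you would likely want an eviction policy (LRU, TTL) rather than an unbounded Map.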

Start compressing your prompts

Free tier: 50,000 tokens/month, no credit card required.