Hi!
You only need our API for the compression part — API keys and LLM usage are entirely managed by your own application. We don't have access to your SaaS, and we don't even know its name. We simply receive the text through our API, compress it, and return the response to your app. Your LLM — whether local, OpenAI, Claude, or any other — then processes it using your own API keys. Your data stays safe with you. And we NEVER ask for your LLM API keys. Let me know if you have any questions :)
nateb2022|6 days ago
christalingx|6 days ago
The cleaner architecture — and what we should have shown — is a two-step approach where our API only handles compression, and your key never leaves your environment:
# Step 1: call AgentReady only to compress
import requests

compressed = requests.post(
    "https://agentready.cloud/v1/compress",
    headers={"Authorization": "ak_..."},
    json={"messages": [{"role": "user", "content": your_long_prompt}]},
).json()

# Step 2: call OpenAI directly with YOUR key — we never see it
from openai import OpenAI

client = OpenAI(api_key="sk-...")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=compressed["messages"],
)
This way AgentReady only touches the text for compression — never your LLM API key. We’ll update the docs and example code accordingly ASAP. Thanks for pushing on this.
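For anyone wiring this up, here is a minimal sketch combining the two steps into one helper, with a fallback so a compression failure never blocks the LLM call. The endpoint URL, auth header, and response shape (`{"messages": [...]}`) are assumptions carried over from the snippet above, not confirmed API documentation, and `chat_with_compression` is a hypothetical name:

```python
# Hedged sketch of the two-step flow. The AgentReady endpoint, auth header,
# and response shape are assumptions taken from the snippet in this thread.
import requests


def chat_with_compression(messages, agentready_key, openai_key, model="gpt-4o"):
    """Compress via AgentReady, then call OpenAI with YOUR key only."""
    try:
        resp = requests.post(
            "https://agentready.cloud/v1/compress",
            headers={"Authorization": agentready_key},
            json={"messages": messages},
            timeout=10,
        )
        resp.raise_for_status()
        messages = resp.json()["messages"]
    except requests.RequestException:
        pass  # compression failed: fall back to the uncompressed messages

    # Imported here so the compression step never touches the OpenAI key.
    from openai import OpenAI

    client = OpenAI(api_key=openai_key)
    return client.chat.completions.create(model=model, messages=messages)
```

The design point is that the two keys travel on separate requests: the AgentReady key goes only to the compression endpoint, and the OpenAI key goes only to OpenAI.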