top | item 47124437

nateb2022 | 7 days ago

I'm sure I'm not the only one hesitant to give a third party what is effectively MITM access to both my LLM usage and my API keys. If this could run locally, or even just expose an API for compressing the non-sensitive parts of a prompt, I think it would be much easier to adopt.

christalingx | 7 days ago

Hi! You only need our API for the compression part — API keys and LLM usage are entirely managed by your own application. We don't have access to your SaaS, and we don't even know its name. We simply receive the text through our API, compress it, and return the response to your app. Your LLM — whether local, OpenAI, Claude, or any other — then processes it using your own API keys. Your data stays safe with you. And we NEVER ask for your LLM API keys. Let me know if you have any questions :)
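
In code, that decoupled flow would look roughly like this — a sketch only: the `/compress` endpoint name and the JSON request/response shapes are illustrative guesses, not a documented API. The point is that only the prompt text goes to the compression service, while the OpenAI key is used solely in the direct call from your own app:

```python
import json
import urllib.request

AGENTREADY_KEY = "ak_..."  # compression-service key (hypothetical flow)
OPENAI_KEY = "sk-..."      # never leaves your app in this flow

def compress_prompt(text: str) -> str:
    """Send only the prompt text to a (hypothetical) compression endpoint."""
    req = urllib.request.Request(
        "https://agentready.cloud/v1/compress",  # guessed endpoint name
        data=json.dumps({"text": text}).encode(),
        headers={
            "Authorization": f"Bearer {AGENTREADY_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["compressed"]  # guessed response shape

def ask_llm(prompt: str) -> str:
    """Your app calls OpenAI directly -- the compressor never sees sk-..."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {OPENAI_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Two separate hops, two separate keys:
    answer = ask_llm(compress_prompt("some very long prompt ..."))
```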

nateb2022 | 7 days ago

Wouldn't the example code:

  from openai import OpenAI

  client = OpenAI(
      base_url="https://agentready.cloud/v1",     # ← only change
      api_key="ak_...",                           # AgentReady key
      default_headers={
          "X-Upstream-API-Key": "sk-..."          # your OpenAI key
      }
  )

  # Every call is now compressed automatically
  response = client.chat.completions.create(
      model="gpt-4o",
      messages=[{"role": "user", "content": your_long_prompt}]
  )
hand you our OpenAI key (via the X-Upstream-API-Key header)?