item 39218936

glimow | 2 years ago

Hello btown, you are indeed raising legitimate questions here.

You are right that running automated security testing tools against production creates risk, but there are mitigations:

1) Most of Escape's security scans happen on staging or pre-prod environments, where there is little risk of breaking something critical or finding real customer data.

2) We have designed a dedicated scan mode for production APIs, built with safety in mind. It skips the riskiest attack scenarios and is therefore safe to run against production, at the cost of scanning depth.

You can choose a scan mode when adding a new application for testing in Escape. So far, most of our users run both modes: one against the production environment and one against the development environment, to spot bugs early.
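To make the two-mode setup above concrete, here is a minimal sketch in Python. This is purely illustrative and assumes a hypothetical configuration shape; the class, field names, and URLs are not Escape's actual API.

```python
# Hypothetical sketch (not Escape's real configuration API): one
# application registered twice, with a different scan mode per environment.
from dataclasses import dataclass

@dataclass
class ScanConfig:
    target_url: str
    mode: str  # "production" (safe, reduced depth) or "staging" (full depth)

configs = [
    ScanConfig("https://api.example.com", mode="production"),
    ScanConfig("https://staging.api.example.com", mode="staging"),
]

for cfg in configs:
    depth = "reduced" if cfg.mode == "production" else "full"
    print(f"{cfg.target_url}: {cfg.mode} mode, {depth} attack depth")
```

The point is simply that the same application is scanned under two policies: the staging target gets the full attack surface, while the production target gets the restricted one.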

No user has ever had problems with the production scanning mode.

By the way, the core algorithm powering Escape is a graph traversal algorithm rather than an LLM. We do use a small, self-hosted LLM for specific inference tasks, but everything is built in-house; we don't call OpenAI or any other inference API.
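To illustrate the graph-traversal idea in general terms (this is an assumed sketch, not Escape's actual implementation): API operations can be modeled as nodes, with an edge when one operation's response can supply another operation's parameters, and the scanner then walks the graph so later requests are issued with realistic data. The operation names below are hypothetical.

```python
# Illustrative breadth-first traversal over a hypothetical API
# dependency graph: createUser's output feeds getUser, and so on.
from collections import deque

graph = {
    "createUser": ["getUser", "updateUser"],
    "getUser": ["listOrders"],
    "updateUser": [],
    "listOrders": [],
}

def traversal_order(graph, root):
    """Return operations in the order a BFS scan would exercise them."""
    order, seen, queue = [], {root}, deque([root])
    while queue:
        op = queue.popleft()
        order.append(op)
        for nxt in graph[op]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

print(traversal_order(graph, "createUser"))
# ['createUser', 'getUser', 'updateUser', 'listOrders']
```

Ordering requests this way means each call can reuse identifiers produced by earlier calls, which is what lets a scanner reach deep operations without guessing inputs.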

Hope that helps!


ichbinlegion | 2 years ago

> It will not attempt the riskiest attack scenarios

What does that mean exactly?

Do you manually assess what is risky for a particular API, or is it up to the system to choose?

If it's up to the system, what happens if it decides that deleting user data is not risky?

glimow | 2 years ago

We created specific safeguards for production mode; for instance, Escape never issues DELETE requests when scanning in prod mode.

You can also manually configure an allowlist/blocklist of operations for specific use cases.