ZeroSolstice | 2 years ago
> Those folks have a lock bc there’s a small group who knows assembly and OSs across multiple systems very well and knows it from a security context.
There are two parts to this. The first is that for some of these businesses in that arena, I'm sure if they could speed up analysis to take on more client jobs while requiring less labor, they would have done so already. Second is: what output are you going to provide that wouldn't need the very same people to decipher, validate, or explain "what" is going on?

As an example, if you get hacked and you make a cyber insurance claim, you are going to have to sufficiently explain to the insurance company what happened (so they can try to get out of paying you), and you won't be able to say "Xyz program says it found malware, just trust what it says." If people don't understand how the result was generated, they could be implementing fixes that don't solve the problem, because they are depending on the LLM/decision tree to tell them what the problem is. All these models can be gamed, just like humans.
I'm not quite sure I agree that a better LLM is what has been keeping people from implementing pipeline logic that produces actionable correlation security alerts. Maybe it does improve things, but my assumption is that, much like we still have software developers, any automation will just create a new field of support or inquiry that will need people to parse it.
dogman144 | 2 years ago
- speed at which actionable insights can get generated (otherwise needing a very high paid eng poking through Ghidra and cross-log correlation)
- reduced need for very high paid DFIR engs due to the above.