
Show HN: I Created ErisForge, a Python Library for Abliteration of LLMs

140 points | tsadoq | 1 year ago | github.com

ErisForge is a Python library designed to modify Large Language Models (LLMs) by applying transformations to their internal layers. Named after Eris, the goddess of strife and discord, ErisForge allows you to alter model behavior in a controlled manner, creating both ablated and augmented versions of LLMs that respond differently to specific types of input.

It is also quite useful for studying propaganda and bias in LLMs (I'm planning to experiment with DeepSeek).

Features:

- Modify internal layers of LLMs to produce altered behaviors.
- Ablate or enhance model responses with the AblationDecoderLayer and AdditionDecoderLayer classes.
- Measure refusal expressions in model responses using the ExpressionRefusalScorer.
- Supports custom behavior directions for applying specific types of transformations.
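To make the idea concrete, here is a minimal sketch of the general technique usually called abliteration (directional ablation). This is not ErisForge's actual API; the helper names are made up for illustration. The idea: estimate a "refusal direction" as the difference of mean activations on two sets of prompts, then subtract each activation's component along that direction.

```python
# Illustrative sketch of directional ablation; helper names are
# hypothetical, not ErisForge's API.
import numpy as np

def refusal_direction(refused_acts, answered_acts):
    """Unit mean-difference direction between two activation sets (n, d)."""
    d = refused_acts.mean(axis=0) - answered_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(acts, direction):
    """Remove each activation's component along `direction`."""
    return acts - np.outer(acts @ direction, direction)

# Toy activations standing in for hidden states from two prompt sets.
rng = np.random.default_rng(0)
refused = rng.normal(size=(32, 16)) + 2.0
answered = rng.normal(size=(32, 16))

r = refusal_direction(refused, answered)
out = ablate(refused, r)
print(np.abs(out @ r).max())  # residual component along r is ~0
```

In practice this projection is applied to the residual stream inside the model's decoder layers rather than to a standalone activation matrix.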

52 comments


BoxOfRain|1 year ago

>Named after Eris, the goddess of strife and discord

For bonus points, your version scheme should follow the Law of Fives.

drcongo|1 year ago

The kallisti logo is surely worth bonus points too.

nico|1 year ago

This is a fascinating concept, i.e. modifying trained LLMs to create different models.

Do these techniques train models while performing the modifications?

Are there pre-trained models that “know how to” modify LLMs for certain goals?

It would be amazing to have models that could strip LLMs to some very basic small model of whatever I want. Like reducing an LLM to something that just knows some basic “American English”, then running that on CPU

tsadoq|1 year ago

> Do these techniques train models while performing the modifications?

Depends on what you mean by training; they change the weights.

> Are there pre-trained models that “know how to” modify LLMs for certain goals?

I'm not sure I understand, but there is an example of performing an abliteration on Gemma to make it never refuse an answer. It's about 10 lines of code.
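(The Gemma example itself isn't reproduced in the thread. As a rough illustration of the layer-wrapping idea, here is a hypothetical sketch, not ErisForge's actual AblationDecoderLayer: a wrapper runs the original layer and then projects an unwanted direction out of its output.)

```python
# Hypothetical wrapper illustrating an "ablation layer"; not ErisForge code.
import numpy as np

class AblationLayer:
    def __init__(self, layer, direction):
        self.layer = layer
        self.direction = direction / np.linalg.norm(direction)

    def __call__(self, hidden):
        out = self.layer(hidden)
        # Project the unwanted direction out of the layer's output.
        return out - np.outer(out @ self.direction, self.direction)

# Toy "decoder layer": an identity map standing in for a transformer block.
layer = AblationLayer(lambda h: h @ np.eye(4),
                      direction=np.array([1.0, 0.0, 0.0, 0.0]))
h = np.ones((2, 4))
print(layer(h))  # component along the first axis is removed
```

In a real model you would wrap (or hook) each decoder layer this way, with `direction` estimated from activations as in the mean-difference sketch above.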

spacecadet|1 year ago

Very cool! I have a ghetto set of scripts that do the same- looking forward to trying this out.

noman-land|1 year ago

Oh that's neat. I, myself, have an internment camp set of scripts for something similar.

tsadoq|1 year ago

Please give feedback! It's quite a raw first implementation, and it would be great to have suggestions and improvements.

deadbabe|1 year ago

I don’t get the point of abliteration of LLMs. You’re lobotomizing the model and it will result in worse performance.

If you’re doing it to get past refusals you might discover the LLM wasn’t even trained much on refusable content so it will output poor results.

We’ll look back on this practice and shake our heads someday.

xrd|1 year ago

Anyone tried this on DeepSeek with information about Tiananmen Square?

TechDebtDevin|1 year ago

The whole Tiananmen Square discourse is getting very tiring.

giancaIta|1 year ago

This seems super cool! Is there a way to test it with DeepSeek?

tsadoq|1 year ago

Planning to update it to run on DeepSeek. It's just a matter of finding the right keys in the model's layer dict.

notavalleyman|1 year ago

Are there ethical considerations here?

We'd consider it abhorrent to do brain surgery on a person or animal, to make them more compliant, or less likely to refuse instructions.

observationist|1 year ago

None whatsoever. There's no recursion or state in these models sufficient to support whatever the algorithm of consciousness must be. At best you can get hacky loops by pushing pseudo-state via context, but whatever consciousness is will require more than transformer-only LLMs are capable of doing.

Some of the state space models and RWKV present interesting questions - the capacity might well exist, and so the questions become important. If the important bit that makes it an agent - a self aware, morally valent being - is present at runtime, but goes away if you halt the program, then do you have an obligation to let that software continue running? What about if the selfhood comes about as part of the static structure, and runtime isn't part of it - what is the being entitled to by dint of mere existence?

We're beginning to poke holes in strange epistemological barriers and encounter questions that were entirely theoretical until about 5 years ago. We live in interesting times.

deadbabe|1 year ago

Such anthropomorphizations of LLMs are unhelpful for people’s understanding of how they work, and they push people toward superstitious beliefs.