I understand your frustration. I trust you also understand that the models have some dark corners that someone could use to misrepresent the goals of our project. If you have ideas on how we could make the models more broadly accessible while avoiding that risk, please do reach out @ history-llms@econ.uzh.ch
999900000999|2 months ago
So as a black person should I demand that all books written before the civil rights act be destroyed?
The past is messy. But it's the only way to learn anything.
All an LLM does is take a bunch of existing texts and rebundle them. Like it or not, the existing texts are still there.
I understand an LLM that won't tell me how to do heart surgery. But I can't fear one that might be less enlightened on race issues. So many questions to ask! Hell, it's like talking to an older person in real life.
I don't expect a typical 90 year old to be the most progressive person, but they're still worth listening to.
DGoettlich|2 months ago
tombh|2 months ago
I suspect restricting access could equally be a comment on modern LLMs in general, rather than the historical material specifically. For example, we must be constantly reminded not to give LLMs a level of credibility that their hallucinations would have us believe.
But I'm fascinated by the possibility that somehow resurrecting lost voices might give an unholy agency to minds and their supporting worldviews that are so anachronistic that hearing them speak again might stir long-banished evils. I'm being lyrical for dramatic effect!
I would make one serious point though, one that I do have the credentials to express. The conversation may have died down, but there is still a huge question mark over, if not the legality, then certainly the ethics of restricting access to, and profiting from, public domain knowledge. I don't wish to suggest a side to take here, just to point out that the lack of conversation should not be taken to mean that the matter is settled.
qcnguy|2 months ago
Their concern can't be understood without a deep understanding of the far left wing mind. Leftists believe people are so infinitely malleable that merely being exposed to a few words of conservative thought could instantly "convert" someone into a mortal enemy of their ideology for life. It's therefore of paramount importance to ensure nobody is ever exposed to such words unless they are known to be extremely far left already, after intensive mental preparation, and ideally not at all.
That's why leftist spaces like universities insist on trigger warnings on Shakespeare's plays, why they're deadly places for conservatives to give speeches, why the sample answers from the LLM are hidden behind a dropdown and marked as sensitive, and why they waste lots of money training an LLM that they're terrified of letting anyone actually use. They intuit that it's a dangerous mind bomb because if anyone could hear old fashioned/conservative thought, it would change political outcomes in the real world today.
Anyone who is that terrified of historical documents really shouldn't be working in history at all, but it's academia so what do you expect? They shouldn't be allowed to waste money like this.
bogedy|2 months ago
qcnguy|2 months ago
We all get that academics now exist in some kind of dystopian horror where they can get transitively blamed for the existence of anyone to the right of Lenin, but bear in mind:
1. The people who might try to cancel you are idiots unworthy of your respect, because if they're against this project, they're against the study of history in its entirety.
2. They will scream at you anyway no matter what you do.
3. You used (Swiss) taxpayer funds to develop these models. There is no moral justification for withholding from the public what they worked to pay for.
You already slathered your README with disclaimers even though you didn't release the model at all, just showed a few examples of what it said, none of which are in any way surprising. That is far more than enough. Just release the models and if anyone complains, politely tell them to go complain to the users.
ThePyCoder|2 months ago
Now, if access were limited in order to charge money to compensate for the time and money spent compiling the library (or training the model), sure, I'd somewhat understand. Not agree, but understand.
As it stands, it just feels like you want to prevent your model's name being associated with the one guy who might use it to create a racist-slur Twitter bot. There are plenty of models for that already. And a model like this carries enough weight on the positive side of the societal balance to be a net positive.
unknown|2 months ago
[deleted]
naasking|2 months ago
diamond559|2 months ago
pigpop|2 months ago
unknown|2 months ago
[deleted]
charlesguy|2 months ago
unethical_ban|2 months ago
Movie studios have done that for years with old movies. TCM still shows Birth of a Nation and Gone with the Wind.
Edit: I saw further down that you've already done this! What more is there to do?
f13f1f1f1|2 months ago
[deleted]