I think whether a text is written with the help of AI is not the main issue. The real issue is that for texts like police reports, a human still has to take full responsibility for their contents. If we preserve this understanding, then the question of which texts are generated by AI becomes moot.
moffkalast|2 months ago
Of course the problem is also that police often operate without any real oversight and cover up more misconduct than workers in an under-rug sweeping factory. But that's another issue.
jMyles|2 months ago
...is it?
It seems to me that the growth of professional police as an institution bearing increased responsibility for public safety, along with an ever-growing set of tools for deferring that responsibility (see: it's not murder if it's done with a stun gun, regardless of how predictable those deaths are), is actually precisely the same issue.
Let's stop allowing the state to hide behind tooling, and all be approximately equally responsible for public safety.
riedel|2 months ago
To ensure safety, those offerings must use premarket red teaming to eliminate biases in summarization. However, ethical safety also requires post-market monitoring, which is impossible if logs aren't preserved. Rather than focusing on individual cases, I think we must demand systemic oversight in general, plus access for independent research (not focused only on one specific technology).
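To make the log-preservation point concrete: here is a minimal sketch of what a tamper-evident audit log for AI-generated summaries could look like. The record fields and the hash-chaining scheme are my own illustration, not anything a vendor actually ships; the idea is just that each record commits to the previous one, so deletions or edits become detectable by independent reviewers.

```python
import hashlib
import json

def append_audit_record(log, prompt, output, model):
    """Append a record of one AI summarization to an in-memory log.

    Each record's hash covers the previous record's hash plus its own
    contents, forming a chain: altering or dropping an earlier record
    invalidates every hash after it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"model": model, "prompt": prompt, "output": output, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        (prev_hash + json.dumps(record, sort_keys=True)).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every hash in order; return False if any link is broken."""
    prev_hash = "0" * 64
    for rec in log:
        body = {k: rec[k] for k in ("model", "prompt", "output", "prev")}
        expected = hashlib.sha256(
            (prev_hash + json.dumps(body, sort_keys=True)).encode()
        ).hexdigest()
        if rec["prev"] != prev_hash or rec["hash"] != expected:
            return False
        prev_hash = rec["hash"]
    return True
```

In practice an auditable deployment would write such records to append-only storage held by a third party, but even this toy version shows why "the logs weren't preserved" forecloses post-market monitoring entirely.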
jMyles|2 months ago
If what you mean is, "texts upon which the singular violence of the state is legitimately imposed", then a simple solution (and I believe, on sufficiently long time scales, the happily inevitable one) is to abolish police.
I can't fathom that, in an age where we have ubiquitous cameras as eyewitnesses and instant communications to declare emergencies and request aid from nearby humans, we need an exclusive entity whose job it is to advance safety in our communities. It's so, so, so much more trouble than it's worth.
ssl-3|2 months ago
But to try to answer some of what I think you're trying to ask about: The bot can be useful. It can be better at writing a coherent collection of paragraphs or subroutines than Alice or Bill might be, and it costs a lot less to employ than either of them do.
Meanwhile: The bot never complains to HR because someone looked at it sideways. The bot [almost!] never calls in sick; the bot can work nearly 24/7. The bot never slips and falls in the parking lot. The bot never promises to be on duty while actually vacationing out-of-state behind a VPN, and never uses a mouse-jiggler to game the metrics while sleeping off last night's bender.
The bot mostly just follows instructions.
There's lots of things the bot doesn't get right. Like, the stuff it produces may be full of hallucinations and false conclusions that need to be reviewed, corrected, or outright excised.
But there's lots of Bills and Alices in the world who are even worse, and the bot is a lot easier and cheaper to deal with than they are.
That said: When it comes to legal matters that put a real person's life and freedom in jeopardy, then there should be no bot involved.
If a person in a position of power (such as a police officer) can't write a meaningful and coherent report on their own, then I might suggest that this person shouldn't ever have a job where producing written reports is part of the work. There's probably something else they're good at that they can do instead (the world needs ditchdiggers, too).
Neither the presence nor absence of a bot can save the rest of us from the impact of their illiteracy.