writingdna | 8 days ago
What is underappreciated is how much stylistic signal lives in what information retrieval people call "burstiness" -- the tendency for distinctive words to cluster rather than distribute evenly. Hemingway's short declarative stacking, DFW's recursive parentheticals, legal writing's formulaic precision -- these are all bursty patterns that a model trained to maximize expected reward will sand down. You can partially recover them with few-shot prompting, but the model is fighting its own reward gradient the entire time.
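Burstiness is easy to quantify, which is part of why it's a strong stylometric signal. A minimal sketch, using the Goh-Barabási burstiness parameter over a word's inter-occurrence gaps (the function name and threshold are illustrative, not from any standard toolkit):

```python
def burstiness(tokens, word):
    """Goh-Barabasi burstiness of `word` in a token sequence.

    -1 = perfectly periodic, ~0 = Poisson-like, +1 = maximally bursty.
    """
    positions = [i for i, t in enumerate(tokens) if t == word]
    if len(positions) < 3:
        return None  # too few occurrences to estimate gap statistics
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    sigma = var ** 0.5
    return (sigma - mean) / (sigma + mean)
```

A word that appears on a fixed cadence scores -1; a word that shows up in tight clusters separated by long silences scores well above 0. Flattening toward the periodic end of that scale is exactly the "sanding down" effect.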
The practical question is whether you can encode a style prior that survives the decoding process. The research on authorship attribution (stylometry) suggests the feature set is well-understood -- function word frequencies, sentence length distributions, type-token ratios, syntactic complexity metrics. But nobody has built a production system that uses those features as a constraint during generation rather than just detection.
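The detection-side feature set really is simple to compute. A minimal sketch of an extractor for the features named above -- the function-word list here is a small illustrative sample, not a standard stylometry lexicon, and the tokenization is deliberately crude:

```python
import re
from collections import Counter

# Illustrative subset; real stylometry lexicons run to hundreds of words.
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "but",
                  "that", "which", "with", "for", "on", "at", "by"}

def stylometric_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    lengths = [len(re.findall(r"[a-z']+", s.lower())) for s in sentences]
    mean_len = sum(lengths) / len(lengths)
    return {
        # function-word rates per 1000 tokens
        "function_word_freqs": {w: 1000 * counts[w] / len(tokens)
                                for w in FUNCTION_WORDS},
        "mean_sentence_length": mean_len,
        "sentence_length_variance": sum((l - mean_len) ** 2
                                        for l in lengths) / len(lengths),
        "type_token_ratio": len(counts) / len(tokens),
    }
```

The generation-side version the comment imagines would presumably use a distance over a vector like this to rescore or prune beam candidates during decoding, rather than to classify finished text -- but as noted, that system doesn't exist in production.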