speakingmoistly | 28 days ago

It does, however, come with organizations monitoring employees invasively. Harm reduction in all of the above can be achieved by having sane PTO / sick leave policies, and by decoupling healthcare from employment (in the US case).

People are not working while ill or diminished because they love their work; they do it because they do not have the option to take time off to recover without consequences. Of course, businesses won't pick the human-decency option: as long as it preserves the current power dynamic, they would rather work people to the bone and strap sensors to them so they know when to discard one for the next in line.

accofrisk | 28 days ago

You raise an important and entirely legitimate concern: coercive monitoring and a power imbalance can indeed turn any technology into something harmful. In this case, however, the core issue is not the data or the AI itself, but the conditions under which it is used.

The harm reduction measures you mention (sane paid-time-off policies that let employees take sick leave without fear of financial or professional consequences, and decoupling healthcare from employment) are necessary systemic changes. But they are not mutually exclusive with preventive medicine. Even in countries with strong social safety nets, people operate vehicles, work with dangerous equipment, and make decisions where sudden health deterioration can put others at risk.

The key distinction is voluntariness, transparency, and user control, not employer control (a rough sketch of what that separation could look like follows the list):

- data belongs to the individual,

- the employer does not see raw metrics, only a binary fitness or risk signal,

- usage is limited to specific safety-critical scenarios, not HR evaluation or productivity tracking.
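
To make the "binary signal only" point concrete, here is a minimal sketch of an on-device check. Everything in it is an illustrative assumption, not a real product or API: the metric names and thresholds are made up, and in practice any cutoffs would come from medical guidance, not the employer. The point is only the data flow: raw readings stay local, and the employer-facing payload carries nothing but a boolean.

    # Hypothetical sketch: raw health metrics stay on the worker's device;
    # only a binary fit-for-duty signal is ever shared with the employer.
    # All names and thresholds below are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class RawMetrics:
        """Raw readings that never leave the device."""
        heart_rate_bpm: float
        hrv_ms: float        # heart-rate variability, illustrative
        hours_slept: float

    def fit_for_duty(m: RawMetrics) -> bool:
        """Reduce raw metrics to a single boolean, computed on-device.

        Placeholder thresholds; real ones would be medically grounded.
        """
        return (
            40.0 <= m.heart_rate_bpm <= 110.0
            and m.hrv_ms >= 20.0
            and m.hours_slept >= 5.0
        )

    def signal_for_employer(m: RawMetrics) -> dict:
        """The only payload the employer's system ever receives."""
        return {"fit_for_duty": fit_for_duty(m)}  # no raw metrics included

    if __name__ == "__main__":
        today = RawMetrics(heart_rate_bpm=72.0, hrv_ms=45.0, hours_slept=7.5)
        print(signal_for_employer(today))  # {'fit_for_duty': True}

The design choice doing the work here is that the reduction to a boolean happens before anything crosses the trust boundary, so HR evaluation or productivity tracking on the raw data is structurally impossible, not merely forbidden by policy.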

This kind of technology should not force people to work while ill. On the contrary, it can provide an objective reason not to work, especially in environments where decisions today are driven by subjective judgment or fear of consequences. Currently, people are often removed only after an incident occurs. Preventive signals create an opportunity to avoid that.

The risk lies not in sensors, but in poor regulation. By the same logic, breathalyzers for drivers or medical clearance for pilots could be called dystopian, yet we accept them because they demonstrably save lives.

The real question is not whether such technology should exist, but how to ensure it is designed and governed in a way that serves individuals rather than working against them.