top | item 40873472

febeling | 1 year ago

I had similar thoughts. But let's not overlook the information asymmetry, which contributed to the dissatisfaction. I don't want to live in a world which is controlled unilaterally, and intransparently by a group of people who assume they have a full picture of the situation and assume they understand moral completely, and also don't think it necessary to explain how they think so highly of themselves.

mft_|1 year ago

It's an interesting question, as we have a spectrum from 'little to no transparency' through to 'full transparency' (which is pretty rare), and in the middle sits the usual approach of 'communications-team-led messaged quasi-transparency'. Difficult to know (without more info) where Shipt would have appeared on this spectrum, but given the issue, they're probably somewhere towards the 'insufficient transparency' end.

What's silly in this case is that (as others have pointed out) the new algorithm seems to have been reasonably equitable, with a genuine redistribution of payments, rather than just a cut overall. Shipt could have avoided this whole situation with a straightforward explanation of the changes, together with a few examples of the cases/jobs in which people would earn more or less.

ddulaney|1 year ago

I think the issue is that there was full transparency on pay (a fixed base rate plus a fixed percentage) and then it was changed without warning.

I work for a salary, which is fully transparent in the sense that I know what my next paycheck will be to the penny. (It’s not transparent in how it’s set, but it is week-to-week.) If my employer started paying me based on effort, and didn’t tell me what constituted effort, not only would I be pissed off but that would be completely illegal.

I’m not suggesting that this change is or should be illegal. But if it happened to me I’d find it extremely unfair.

toss1|1 year ago

YES.

If Shipt is actually trying to incentivize better performance, it seems the best way is to be completely transparent about the rewards algorithm. "Short high-value trips are now somewhat de-rated, and trips requiring more effort now have improved rewards, specifically ..." or whatever.
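For illustration, here is a toy sketch of what such a fully transparent rewards formula could look like. Every parameter name and number below is hypothetical, invented for this example; Shipt's actual algorithm was never disclosed, which is the whole point of the complaint.

```python
# Toy sketch of a transparent pay formula. All names and numbers are
# hypothetical illustrations, NOT Shipt's actual (undisclosed) algorithm.

def order_pay(order_value: float, effort_minutes: float, items: int) -> float:
    """Pay = flat base + effort components + a de-rated cut of order value."""
    base = 5.00        # flat base pay per order
    per_minute = 0.20  # reward estimated shopping/driving time
    per_item = 0.10    # reward large, heavy carts
    value_cut = 0.03   # de-rated from a hypothetical old flat commission
    return round(base + per_minute * effort_minutes
                 + per_item * items + value_cut * order_value, 2)

# Under this scheme a short, high-value trip earns relatively less,
# while a long, many-item trip earns relatively more:
print(order_pay(order_value=200.0, effort_minutes=10, items=5))   # → 13.5
print(order_pay(order_value=60.0, effort_minutes=45, items=30))   # → 18.8
```

Publishing the constants and the formula like this would let workers verify their own payouts, which is exactly what a communications-team paraphrase cannot do.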

This "communications team" approach did everyone a disservice if Shipt management were really trying to improve results.

OTOH, if the actual goal was to screw workers harder, they accomplished that, as there are arguments on HN about how this could be good for the workers, thus successfully obfuscating the goal of screwing the workers.

braza|1 year ago

> I don't want to live in a world controlled unilaterally and opaquely by a group of people who assume they have a full picture of the situation and assume they understand morality completely

I thought about this topic for a while back when I worked with legal data (e.g. family law and military law), and I came to the conclusion that several societal institutions and their agents are inherently opaque, even in situations where some "illusionist transparency" is offered (there is transparency, but the magician diverts your attention elsewhere), e.g. the judicial system, under-the-table political agreements, etc.

That's one of the reasons I would like to have a more algorithmic society, with humans in the loop calling the final shots and placing the rationale on top. An algorithm will have human and institutional biases, but to some extent you can explain part of it and fine-tune it; a human making the final call on top of a given option would need to rationally justify their decision. At best a human actor will use logic and make the right call; at worst they will transparently expose their individual biases.

A4ET8a8uTh0|1 year ago

<< That's one of the reasons I would like to have a more algorithmic society, with humans in the loop calling the final shots and placing the rationale on top. An algorithm will have human and institutional biases, but to some extent you can explain part of it and fine-tune it; a human making the final call on top of a given option would need to rationally justify their decision. At best a human actor will use logic and make the right call; at worst they will transparently expose their individual biases.

I will admit that it is an interesting idea. I am not sure it would work well, as a lot of the power (and pressure to adjust as needed) would suddenly move to the fine-tuning portion of the process, to ensure the human at the top can approve the 'right' decisions. I am going to get my coffee now.

doctor_eval|1 year ago

> That's one of the reasons I would like to have a more algorithmic society, with humans in the loop calling the final shots and placing the rationale on top

But isn’t that what the rule of law is supposed to be? A set of written rules with judges at the top to interpret or moderate them when all else fails.

The problem is that, for a variety of complex reasons, the rules are not applied evenly, and sometimes only enforced opportunistically.

So I don’t see how an algorithmic society is any different from today’s society. The problem is not the ability to operate algorithmically, which we already have, but in determining what the rules should be, how they should be enforced, what the penalties should be, who pays for what, and, perhaps most importantly, how to avoid capture of the algorithmic process by special interests.

None of these problems go away with an algorithmic approach, less so if there is a judge sitting on top who can make adjustments.

hprotagonist|1 year ago

> I would like to have a more algorithmic society, with humans in the loop calling the final shots and placing the rationale on top

Let’s re-audit the algorithm regularly; say, perhaps, a central committee revisits and revises the plan every 5 years?

randomdata|1 year ago

> a human making the final call on top of a given option would need to rationally justify their decision.

To whom? What you describe does not seem much different from the representative governments most of us here are accustomed to, other than that the algorithm eases some of the day-to-day work required of the constituents. Already nobody cares, and no doubt people would care even less if an algorithm let them be even less involved.

theGnuMe|1 year ago

>I don't want to live in a world controlled unilaterally and opaquely by a group of people who assume they have a full picture of the situation, assume they understand morality completely, and don't even think it necessary to explain why they think so highly of themselves.

We already do; uncertainty is fundamental at all levels.

cbsmith|1 year ago

I think that's a very fair point, but wouldn't that be true even if Shipt hadn't made any changes?

It feels to me like the problem wasn't the change. For all we know, the change was a net good thing. The bad thing was the context in which the change occurred.

EnigmaFlare|1 year ago

They had been transparent, and it led to workers finding loopholes that made things unfair for each other. So there is value in opacity, in that an opaque system is harder to exploit. But a properly fair and transparent system would be better still.