> its work was given prominence in the engineering operation by former Chief Technology Officer Michael Schroepfer, who announced last year that he was stepping down.
At the end of 2016, I joined Twitter to lead one of the anti-abuse engineering teams. My boss, who led the effort, was great: loved Twitter, hated abuse, very smart. Six months later the CTO, who had been behind the creation of the anti-abuse engineering effort, left. My boss, previously gung ho, left in a way that made me think he was getting signals from his boss. And shortly after that, said boss's boss said that our team had succeeded so well that there was no need for us. He scattered the engineers, laid off the managers, me included, and declared victory. We all laughed bitterly.
What these have in common for me is a high-level executive launching a special team with great fanfare in a way that addresses a PR problem. But because PR problems are generally fleeting, as soon as do-goodery loses its executive sponsor, everybody goes right back to the short-term incentives, meaning things like "responsibility" go right out the door. At least beyond the level that will trigger another PR disaster.
And if you're wondering why you don't hear more about things like this, you're not supposed to. At least for the managers laid off, it was a surprise meeting and then getting walked out the door. In the surprise meeting, they offered a fair chunk of money (for me, $40k) to sign an additional NDA plus non-disparagement agreement. I happen to have a low burn rate and good savings, so I didn't sign. But I know plenty of people who have looked at the mortgage and kid expenses versus the sudden lack of income and eventually signed.
I’m slightly surprised they’d just lay you off instead of trying to fill other internal positions first. That seems like it’d cost less than $40k for one thing.
As someone who (after 15+ years of career) has been feeling alienated (or worse, mocked) when bringing up ethical issues around the tech I work on or design, I wanted to thank you for the integrity. It feels good to not be alone.
> In the surprise meeting, they offered a fair chunk of money
People are getting hung up on the dollar figure in replies here, which I think misses the point.
It's super common for companies to tie a severance package (assuming more than statutory requirement, ymmv with jurisdiction) to a non-disclosure, non-disparagement, non-compete that is often far more aggressive than whatever you signed at the beginning. It's definitely worth thinking about whether or not the agreement makes sense to you.
This is one of the big differences with executive contracts; often all of this is already sorted out at time of hire, and everyone is pretty much open-eyed about the failure modes.
It's really hard to find a good metric for "Keeping a social network healthy". Most of the changes introduced by a responsibility team will inherently hurt short term metrics or block others from making a change.
I wonder if there is room for a tool which tracked the rate at which users interact with negative or psychologically high risk content on a user generated site.
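A minimal sketch of what the core metric of such a tool might look like, assuming you already have interaction logs and an external classifier that labels content as high-risk. All names here are hypothetical, and the hard part (the classifier itself) is taken as given:

```python
from collections import defaultdict

def negative_interaction_rate(events, risk_labels):
    """Fraction of each user's interactions that touched high-risk content.

    events: iterable of (user_id, content_id) interaction records.
    risk_labels: dict mapping content_id -> True if that content was
    classified as negative/psychologically high-risk.
    """
    total = defaultdict(int)   # interactions per user
    risky = defaultdict(int)   # high-risk interactions per user
    for user_id, content_id in events:
        total[user_id] += 1
        if risk_labels.get(content_id, False):
            risky[user_id] += 1
    return {user: risky[user] / total[user] for user in total}

# Example: user "a" saw one risky item out of two, user "b" one of one.
rates = negative_interaction_rate(
    [("a", "x"), ("a", "y"), ("b", "x")],
    {"x": True},
)
```

Tracking this rate over time per cohort, rather than per user, is probably where it would get interesting: a rising aggregate rate is a health signal that short-term engagement metrics won't surface.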
> our team had succeeded so well that there was no need for us
> At least for the managers laid off, it was a surprise meeting and then getting walked out the door.
I don't get it. Call me naive, but when a team is dismantled because of a change in business priorities, people don't get fired. Unless the company is shrinking, valuable employees are reassigned to different teams.
> And shortly after that, said boss's boss said that our team had succeeded so well that there was no need for us.
This is infuriating. The boss’s boss knew this wasn’t true. You knew it wasn’t true. Why lie? Especially with such an obvious and unconvincing lie? Twitter is so addicted to misinformation they’re even spreading it internally and off-platform.
Teams like this attract a certain type of person, and it's not a builder. In order to justify your own existence, you have to invent blockers to place in front of people who are actually trying to build things.
At a previous job I joked that groups like that were a place to attract / corral employees who should be cut during the next round of layoffs.
In theory these teams could do some good things; in reality they attract or create horrible people with INCREDIBLE efficiency and set up a cycle of endless meetings / recommendations. And it is endless because it is in their best interest to have an endless amount of "work" and to inject themselves in a way that costs them nothing, and everyone else a great deal.
I figured these teams were the easiest place to find people who provide nothing at all / get in the way of folks doing things, and they almost never accomplish their goals anyhow. They wouldn't even know if they did, since these groups tend to center everything on their own actions / the goal is them doing things endlessly.
Somehow word got back to them about the joke, they blamed the wrong person for the joke, tried to raise a hubbub with HR (to their credit HR told them to pound sand). And then they got all laid off ...
> Teams like this attract a certain type of person, and it's not a builder.
Ditto for financial audit teams and so-called "IT security": all they do is block, I've never had anyone in either function build something or help me work faster, just additional processes and bureaucracy that slows down real work.
edit: I thought my sarcasm would be apparent, but Poe's law strikes again.
I strongly disagree. I worked at an analytics company that had and needed a privacy ethics department. They were great. It was a mix of lawyers, philosophers (degree in the subject), and former developers.
They consistently thought about the nuances of baking privacy into a product in a way that I didn't have the background or time for. Every time I worked with them, they helped me build a better product by helping me tweak the specs in a way that had minimal impact on our users (unless they were a bad actor) and strongly increased privacy protections. It was like having a specialized PM in the room.
Sometimes putting blockers in front of people is absolutely what's needed. It's a common tactic for compliance and security - both of which can completely tank your company if not handled correctly. I suspect it's especially true if there are people with the skills to build who claim they are the only ones that are "actually trying to build things." Likely they don't have the right level of understanding of what needs to be built.
The idea is that those blockers or objections are valid. Something is going wrong if they continually raise issues which do not meet the organisation's goals.
That said, it does sound like a strange function to allocate to a specific team.
I don't know what this team is or did. However, to paraphrase Jeff Goldblum from Jurassic Park, if everyone is preoccupied with building things, who will be the one that stops and thinks whether they should?
IME, they don't have the authority to invent/implement blockers. Instead, they attend/hold conferences, give talks, and write documents that nobody acts on. In theory, it's a good way for a company to advertise/lobby to the think-tank set. Clearly that hasn't played out for FB, hence disbanding the team.
Well put. Intentions can start off genuine, where the non builder truly believes the processes and barriers they set up improve building (and early on it's usually true), but it easily morphs into a tool for maintaining power and influence regardless of the value add.
It's like reading that BP disbands their team for renewable energy innovation. Am I supposed to be sad? I'm not. I don't care. It was all fake from the very beginning even if the team had many people who were naive enough to think otherwise.
The point of such teams is not to improve ethics. The point is to have enough credibility to affect the discussion. (E.g. observe how much "research" in AI ethics has financial ties to large companies that are heavily invested in AI-based products at the time.)
What a properly ethical corporation would do is openly admit conflicts of interest and listen to external feedback.
Meta is on the same path that Nokia and Kodak once followed.
They will survive, but they will bleed and shrink a lot, and their relevance will likely be insignificant ten years from now.
The Metaverse bet is dumb; they simply can't execute, and even if they could, this whole Metaverse thing is likely to stay in the realm of video games for a very long time.
Meta's headcount grew 32% in one year, and revenues went down 1% YoY in Q2. Wall Street expects it to go down even more YoY in Q3. Layoffs usually happen in September since that's when budgets are set. So they objectively overhired, expect to shrink, are now just laying off non-revenue generating divisions.
Some can argue there's no such thing as non-revenue generating divisions. Every division contributes, and this one could have helped with the Meta brand, public perception, user retention, etc. But the real way you solve that (as AI researchers have solved) is not having a firefighting team that's expected to fix everything; it's instilling the proper behaviors, processes and culture within each and every team in the company. Having these discrete "divisions" serves PR goals more than anything, and that's expensive PR.
This is exactly like Steve Jobs laying off the Advanced Technology Group. Did that signal a "neglect" of R&D? No.
I personally believe that even Meta management is far more pessimistic about its stock than Wall Street is, and that's why they're playing it safe. Wall Street expects 10% revenue growth in 2023. User count isn't growing. So FB needs to cut spending and focus on core product and increasing revenue per user. Straightforward.
I wonder if internal estoppel is a thing? Two options:
Option A
1) ethicist objects to something
2) management declines the objection
3) whistle blower reveals the declined objection
4) public outcry ensues, stock price falls
5) board members become agitated
6) C-suite wakes up to another gut punch
Option B
1) ethicist raises an objection
2) management acts on the suggestion
3) revenue misses target
4) stock price falls
5) board members become agitated
6) C-suite wakes up to another gut punch
> Facebook dating team’s decision to avoid including a filter that would let users target or exclude potential love interests of a particular race
Lots of minorities actually love this feature because they can find people from their own community. If it were up to teams like this, they would remove the gender filter as well. I’m glad these crazy Twitter people were fired.
Not to sound too bleak, but isn't Pandora's jar already opened as of many years ago? We can hardly perceive the societal harms social media has done, given we still live in the era where it is the main metaphor. You cannot reform it once it's out, in other words.
> The team was made up of engineers as well as people with backgrounds in civil rights and ethics, and advised the company’s product teams on “potential harms across a broad spectrum of societal issues and dilemmas
I suppose the question is whether people are surprised that this team existed in the first place. It sounds like it adds a lot of legal liability: knowing about certain problems and not doing anything about them in due time. I wonder what they ended up finding, if they found anything at all not already known to the public.
I have a question about incentives in organizations. If you form a team to identify problems in your org, wouldn't the team keep finding more problems, no matter what, to justify its own value? A DEI officer will find more injustice in a university, so over the years the U-M History Department had 2.3 DEI officers per staff member. Or Gebru's team kept finding more and more egregious behavior in Google AI.
On the other hand, security teams are highly regarded in a company, even though they are supposed to identify security/privacy problems in the org as well. What made security different from those ethics teams?
"Group was put in place to address potential downsides of the company’s products; Meta says efforts will continue"
should really say
"Group was put in place to make people think the company cares about the downsides of the company's products; Meta says something meaningless about continuing efforts"
Facebook used to be organized at a high level into different groups named after different company goals. Like Engagement, Growth, Utility. Facebook should be engaging, Facebook should grow, Facebook should be useful to people. Eventually they got rid of the "Utility" group while keeping "Engagement" and "Growth", leaving a bunch of us feeling like... I guess Facebook gave up on being useful?
I bet they are having to take a long hard look at where they are spending their money. A team like that is great when money is plentiful, but a drain when it's not.
It is impossible to have an internal oversight team that’s without a conflict of interest.
From law enforcement to newspapers to government to technology: unless there's demand from the public and the oversight is independently funded and managed, the team's function will, without fail, end up either functionally meaningless or aligned to the parent organization's core objectives.
Beyond that, no venture is without flaws, and until people are able to acknowledge that an optimal solution does not mean zero negative impact, it will be a race to the bottom, in which another culture that is able to manage the complexity, whether by luck or skill, will eventually replace those who are unable to do so.
I appreciate your $40k comment here!
- "Pump and dump."
- "Savior to villain."
I see this pattern frequently: with corporations' power and influence, their ethics are checked only by law and public sentiment.
- should we do this?
- who do we hurt by doing this?
- oh god people are hurting why are we still doing this?
Steve Jobs: "focus is about saying no"
Mark Zuckerberg: "move fast and break things"
Facebook is the world's least-trusted, most-reviled brand. They earned it by building garbage.
Whenever I come across a dystopian-sounding name like that, I immediately distrust them.