The article is interesting on the whole (I have no experience with "professional" work of this kind, and would welcome suggestions on how to become more familiar with it), but I latched onto this nugget:
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management. We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably. We plan to do with 100 people what Allianz and others do with 100,000.
Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job, I see one very concrete moral problem:
that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI.
That, to me, is deeply disturbing, and very very difficult to justify.
>> that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human.
Real world evidence supporting your argument:
United Health Group is currently embroiled in a class action lawsuit pertaining to using AI to auto-deny health care claims and procedures:
The plaintiffs are members who were denied benefit coverage. They claim in the lawsuit that the use of AI to evaluate claims for post-acute care resulted in denials, which in turn led to worsening health for the patients and in some cases resulted in death.
They said the AI program developed by UnitedHealth subsidiary naviHealth, nH Predict, would sometimes supersede physician judgement, and has a 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.
AI adjudication of healthcare claims is fine, but there need to be extremely steep consequences for false negatives, and a truly independent board of medical experts to appeal to. If a large panel agrees the denial was wrong, a penalty of 10-100x the cost of the procedure would be assessed, depending on the consequences of the denial.
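The proposal above can be sketched as a simple rule. This is purely an illustrative toy, not an actual policy design: the panel size, the two-thirds supermajority, and the severity multipliers are all my own assumptions.

```python
# Hypothetical sketch of the penalty scheme proposed above: an independent
# panel reviews a denial, and a wrongful denial is penalized at a multiple
# of the procedure cost that scales with the severity of the consequences.
# All names, thresholds, and multipliers here are illustrative assumptions.

SEVERITY_MULTIPLIER = {
    "minor": 10,      # inconvenience, delayed care
    "serious": 50,    # lasting harm caused by the delay
    "fatal": 100,     # denial contributed to a death
}

def assess_penalty(procedure_cost: float, panel_votes_wrongful: int,
                   panel_size: int, severity: str) -> float:
    """Return the penalty owed if a supermajority of the panel finds
    the denial wrongful; zero otherwise."""
    if panel_votes_wrongful / panel_size < 2 / 3:
        return 0.0
    return procedure_cost * SEVERITY_MULTIPLIER[severity]

# A $20,000 procedure wrongly denied, with serious consequences,
# found wrongful by 9 of 10 panelists:
print(assess_penalty(20_000, 9, 10, "serious"))  # 1000000
```

The point of the multiplier is that a 90% overturn rate stops being profitable once each wrongful denial costs far more than the claim it avoided.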
I don't think there's an ethical responsibility to worry about your competitors' labor. That would lead to stagnation and its own sort of ethical issues.
Can we imagine a world where the claims are adjudicated by an uninterested party (as far as possible)? I don't want the insurance company to decide a contractual issue, that's ridiculous. At the moment they're kept honest by the law and by public opinion (which varies by country), but the principal-agent problem is too big to ignore.
This comment is at the heart of many of the challenges tech companies face - they can scale the serving of content - but struggle to scale the content moderation and/or dispute resolution.
It's a common problem with automation - the focus is often on accelerating the 'happy' path, only to realise dealing with the exceptions is where the real challenges lie.
One tried and trusted way around that is to cherry-pick customers as part of your strategy: you sell insurance to people who will never claim (and hence never dispute), and shun those likely to.
However, such market segmentation results in no insurance for the people who would need it, and the people who don't need it wondering why they are buying it; i.e. optimal efficiency for an insurance company is to simply offer no value at all.
I.e. you could argue the whole value proposition of an insurance company is to pool risk rather than segment it, and critically to provide fair arbitration (protecting the majority of the pool from those who would commit insurance fraud, while still paying out legitimate claims).
Buying 'peace of mind' requires belief in a fair-dealing insurer; that's the key scale challenge, not pricing or sales.
I don't see it as inherently a problem; AI can (theoretically) be a lot more fair in dealing with claims, and responds a lot sooner.
That said I suspect the founder is seriously overestimating the number of highly intelligent, competent people he can hire, and underestimating how much bureaucratic nonsense comes with insurance, but that's a problem he'll run into later down the road. Sometimes you have to hire three people with mediocre salaries because the sort of highly motivated competent person you want can't be found for the role.
> that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI. That, to me, is deeply disturbing, and very very difficult to justify.
I don't know. Given the human beings I've interacted with in customer support, and the number of times I've had to escalate because they were quite simply "intelligence-challenged" who couldn't even understand my issues, I'm not sure this is a bad thing.
In my limited experience with AI agents, they've been far more helpful and far faster, they actually seem to understand the issue immediately, and then either give me the solution (i.e. the obscure fact I needed in a support PDF that no regular rep would probably ever have known) or escalate me immediately to the actual right person who can help.
And regular humans will stonewall you anyway, if that's corporate policy. And then you go to the courts.
Nothing new or revolutionary, just the usual race to the cost bottom with corresponding quality bottom.
The author ignores the fact that in any normal market there are insurance products at various prices, yet somehow not all people flock to the cheapest one; quite the contrary (at least where I live). Higher fees can mean, for example, a less stressful life when dealing with the insurer.
Ethical issues of putting people out of a job? Please. This mindset has to be called out, because it directly causes suffering by creating a societal permission structure for politicians to protect interest groups with protectionist trade policy and internal pork-barrel spending.
Economic productivity putting people out of jobs is both good and necessary and it is unethical to work against it.
There's a huge assumption in your comment -- that having 100,000 employees necessarily guarantees (or even makes likely) that you will have some human to help you.
More likely, those 100,000 humans are mostly working on sales and marketing, and the few allocated to support are all incentivized to avoid you, and to send you canned answers. A reasonably decent AI would be better at customer support than most companies give, since it'll have the same rules and policies to operate with, but will most likely be able to speak and write coherently in the language I speak.
> the only way to provide dispute resolution and customer service to 1B people is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI.
The Catholic church has 1B "customers" and seems to be doing ok with human-to-human interaction without the need (or desire) for AI. They do so via ~ 500K priests and another 4M lay ministers
Wanted to point to the startup the author seems to be running, which is to sell insurance somehow tied to Bitcoin: https://meanwhile.bm/
For the record, that strikes me as seriously improper. Life insurance is a heavily regulated offering intended to provide security to families. It is the opposite of bitcoin, which is a highly speculative investment asset. Those two things should not be mixed.
Also, the fact that the disclosure seems to limit sales to occurring only in Bermuda seems intentional. I suspect this product would be highly illegal in most if not all US states, so they must offer it for sale only in Bermuda to avoid that issue.
I think it's actually tax avoidance disguised as life insurance:
> You can borrow Bitcoin against your policy, and the borrowed BTC adopts a cost basis at the time of the loan. So if BTC were to 10x after you fund your policy, you could borrow a Bitcoin from Meanwhile at the 10x higher cost basis—meaning you could sell that BTC immediately and not owe any capital gains tax on that 10x of appreciation
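The mechanics quoted above can be made concrete with a toy calculation. This assumes, purely for illustration, that the borrowed BTC really does adopt its cost basis at loan time as the quote claims; the numbers are made up.

```python
# Toy walkthrough of the cost-basis trick described in the quote above.
# Assumption (from the quote, not verified): BTC borrowed against the
# policy adopts its cost basis at the time of the loan.

purchase_price = 10_000   # USD per BTC when the policy was funded
price_at_loan = 100_000   # BTC has 10x'd since funding

# Path 1: sell BTC you bought directly -- the appreciation is a taxable gain.
direct_gain = price_at_loan - purchase_price   # 90,000 of taxable gain

# Path 2: borrow BTC against the policy at the loan-time basis, then sell
# immediately -- sale price equals basis, so the realized gain is zero.
borrowed_basis = price_at_loan
sale_price = price_at_loan
loan_gain = sale_price - borrowed_basis        # 0 taxable gain

print(direct_gain, loan_gain)  # 90000 0
```

In other words, the structure converts what would be a taxable 10x gain into a loan against an insurance policy, which is exactly why it reads as tax avoidance dressed up as life insurance.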
My wife made a McKinsey consultant cry… She had hired McKinsey for some internal project. One person on the project was a recent Harvard grad. They were in a meeting going over the deliverables, along with the McKinsey partner on the project, and my wife said something to the effect that their work wasn't up to McKinsey standards.
The junior guy started crying in the meeting. Like just blubbering. My wife still feels bad for it but still…
The weird thing: instead of firing him, McKinsey kept him and stipulated that he could only be in meetings when the partner was present.
I don't care if you went to an Ivy League school and graduated at the top of your class; I really don't get WTF someone whose life experience has been almost exclusively in school actually knows about running a business.
Get at least a few years work experience and call me. Or alternatively, start your own dang business if you are really that smart.
Having worked at Mck, what I could very well imagine happened behind the scenes here was
1. This BA/Asc was on <4 hours of sleep, maybe many days in a row
2. They walked into that meeting thinking they had completed exactly what the client (your wife) wanted
And after the meeting (this I feel more confident about, as it happens a lot)
1. A conversation happened to see if the BA/Asc wanted to stay on the project
2. They said yes, and the leadership decided that the best way to make this person feel safe was to always have a more experienced person in the room to deal with hiccups (in this case, the perception of low quality work)
Genuinely funny. I once had to interface with a small team from Deloitte on a project, and pushed hard during an early meeting for them to outline the problems and scope. Just complete incompetence... I didn't make anyone cry, but they definitely squirmed a lot. Just asking questions about their understanding, their process for closing gaps in that understanding, and their project management plans was enough to make clear to the main executive stakeholder on our end that this was going to be a trainwreck. They were fired shortly after.
This reads like a LinkedIn post, and I'm only commenting because I'd like to hear more about the second type of big-org problems he faced, the ones he felt weren't fixable, and why; instead I got a pitch for his new startup, which I guess should've been expected from the title. I just hoped for more substance.
Oh McKinsey had a name for that program ("Leap"). I once worked at a "Telco Enterprise Startup" in Berlin founded by them.
They essentially lied about any anticipated KPI potential and let their "tech" people put together a 15k EUR/month (before public release) platform on AWS that was such a mess that the second year's CTO started from scratch. After some heavy arguments over their poor performance, McKinsey agreed to let some "non-technical" people work there for a couple of months for free. Every argument you had with the McKinsey "Engineers" felt like talking to AWS Sales: they had barely any technical insight, just a catalog of "pre-made solutions" to choose from.
Looking at the home page of Meanwhile only made me think of how life insurance is such a different thing than, say, a mortgage. With life insurance, counterparty risk matters. You don't care about your mortgage counterparty. I'm not going to buy life insurance from an insurer with Youtube videos of Anthony Pompliano on their home page. Know your enemy.
The engineer in me immediately looks for ways to map out how tax avoidance via crypto trading on life insurance funds, run through a Bermuda company, can go wrong. Insurance has a nice long-term cash flow that has proved very sweet for Berkshire Hathaway, and investing on top of that float gives the insurer real advantages. But crypto, which has liquidity issues and is heavily scammed and stolen, would benefit the business far more than its customers. The holdings would sit for decades, allowing the main company to arbitrage against user investments. If there is a leak or a collapse in the crypto, customers won't know until they can't get their funds back; and since AI is handling the claims, they may never find out the real reason. And since it's life insurance, the buyer might never find out at all, while their descendants or loved ones may not know how to deal with it, or be plenty confused by the lack of customer service. A very novel scheme.
The author is not consistent. He mentions that in 25 cases, the firm hiring McKinsey did not know the answer beforehand. Yet Leap is based on firms already knowing the answer. The reason McKinsey is hired is to avoid internal conflicts over which manager takes the reins. I doubt McKinsey is providing solutions to these industries (as in, introducing a product that was not already pitched internally by someone; in fact, in most cases a manager will pitch the solution, and McKinsey's job is purely finding the right managers to leave this internal "startup" to). Should that be the case, I would love to be proven wrong.

However, every consultant I have met is no engineer or tech leader. They are merely consultants, restructuring the answers in ways that avoid conflicts within established giants. Most of them are Ivy League graduates who never worked in a technical field (hired at Bain or McKinsey fresh out of school). Often we would make up stacks to demonstrate how ignorant they can be of technology.

Managers and business people love McKinsey. As an engineer, I have not met any tech founder or technical engineer who esteemed the field (just listen to Steve Jobs' opinion on consulting). I attribute the mess that Google is in under Sundar to McKinsey (not even mentioning the opioid crisis, where their hands are stained too). The redeeming factor is that the author describes them as the enemy and is at least honest about his reasons for joining (stability and an established resume name).
Having been through consulting a little bit myself, I can confirm that most of the time you pay a very expensive price for inexperienced and incompetent consulting work.
It is not malevolence but more a deficiency by design. First, as you said, most consultants are not real "leaders" or "tech leaders", and once you start to gain experience, you either leave consulting or climb the hierarchical ladder to become a more senior manager, who spends more time finding contracts, negotiating, and dealing with customer proposals and renewals than doing the actual job.
In the end, you have juniors doing the tasks while pretending to be sector experts, or, like the guy in the article, you are propelled to "CEO" of a big entity of a big corp after just 4 or 5 years of basic consulting experience, without ever having worked in a real non-consulting job.
And when you buy an engagement from such a firm, most of the cost goes to structural overhead and the daily rates of a chain of useless parasitic executives (directors, executive partners, vice presidents) who will spend 10 minutes per month reviewing slides on the project. The consultant actually doing the job will be paid at most double what a good freelancer can expect.
And regarding the spirit of it: the funny thing is that even when you do bad or evil things, there is a kind of mental block that makes you truly believe you are doing useful and much-needed work, despite that not being the case.
In my own case, I often had this bad feeling deep down: that a customer with whom we were negotiating a multi-million engagement could have just hired a decent developer for a few thousand euros and gotten a perfectly good and successful system.
But, like in The Matrix, when you are inside the system, it is hard to consider things from outside the box.
In the same way, to return to the example in the blog post, I'm quite sure that the big company would have been able to find hundreds of existing employees who would have fit the bill as well or better, for a lot cheaper.
"As an engineer," you've probably yet to realize "technical cofounder" is p97 a polite way to say "second-class citizen." You get more equity than a "founding engineer," I hope. So there's that.
> Meanwhile: to break into a highly-regulated, commoditized market like insurance, you need both a truly differentiated product that incumbents can't easily replicate and an associated distribution strategy that leverages their blind spots.
Having worked in highly regulated industries, I’ve learned that the best way to disrupt incumbents is by creating a product that assumes more business risk than is typically accepted. Large, regulated companies are extremely risk-averse—so if you can take on that risk in a smart, innovative way, you’ll win.
What if you take that risk by putting "crypto" in it? I think it might work out for our founder here but I am not so optimistic about the results for any of the poor schmucks suckered into this scheme.
Valuable article, it's rare to see a glimpse into McKinsey in normal human language.
The fact that the company has become a sort of pseudo-VC (mentorship but not financing) for small teams within megacorps is interesting. I wonder why large corps find it so difficult to innovate. I think that they become somewhat "load-bearing" in society and the lines between the company and the market begin to blur. Any change the company makes causes a misalignment because they shaped the market to fit themselves.
> I learned deeper truths about where startups can win and compete.
Now that I'm working at a big organization (a Fortune 500 company), I can relate. I'm by far the most innovative person on my team, and I'm being held back because it's not my role (I'm not a dev but a data analyst at the moment).
If I stuck to my role, however, we wouldn't be innovating, and the C-suite wants us to innovate with AI. I'm the only one in my department who can create actual AI automations. And the IT department has basically been stripped out by upper management.
If anyone wants an actual dev building AI automations and think how we can disrupt with the state of the art, my email is in my profile.
In my home country (Norway), I've met plenty of startup founders that come from MBB consulting. Actually a pretty "normal" path here, compared to jumping straight into entrepreneurship out of college. But that also has something to do with how risk-averse investors here are, compared to the US (no one here is going to give inexperienced college kids a bunch of money, unless they've proven themselves to be bona fide serious people) - and the fact that consultants actively get to see market needs in real time, and in positions where other external people might not.
I thought "know your enemy" would refer to the investigative book "When McKinsey Comes to Town" (covering McKinsey's major involvement in lobbying for tobacco, the opioid epidemic, and many more crimes left mostly unpunished).
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management. We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably. We plan to do with 100 people what Allianz and others do with 100,000.
So 3 years at McKinsey taught OP the corporate BS. That paragraph doesn't say anything useful.
Yes it does, it's just dressed up in corporate speak:
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management.
We think we can be bigger (more customers, more sales, more money) than all existing players.
> We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably.
We're looking to eclipse the population of any one country and we're going to use something like Bitcoin to side-step national currencies (and maybe also to avoid existing regulatory structure, not clear from the ambiguous language).
> We plan to do with 100 people what Allianz and others do with 100,000.
We believe we can automate or use AI to eliminate the need for people to actually support these billion customers.
All three of those are very bold statements/goals.
There are different metrics that people use to say they're the biggest.
Some of them off the top of my head are number customers, number of active policies, premium amount, assets under management, time to claim resolution, etc. He's talking to business people who understand the insurance market.
Key claim: "disruption" is impossible for BigCompany. SmallCo can do it, but only if it both (1) has something technically hard to replicate, and (2) targets a marketing niche that is an irremediable blind spot for BigCompany. Since his venture now is life insurance, Geico is likely the comparable case in point.
I really think every founder (and startup worker) needs to take seriously the marketing side of the business, and not just believe that new technology will win.
(While I, too, am allergic to bitcoin scams, given increasing levels of political corruption monkeying with markets, rates, and regulation, I can also see it as an enticing alternative for those looking to get long-term investments off the dollar. For insurance, the main question is, will the money be there and be made available? Having seen even highly-regulated pensions fail (without federal insurance recourse in the case of religious hospital behemoths), I can see how technical guarantees independent of regulation or law could be compelling.)
I was confused by this as ChatGPT launched in Nov 2022 and had tens of millions of users by end of 2022.
> And though when we started our business in 2023 (ChatGPT wasn’t out yet), you could begin to feel that something like that was possible in a way it wasn’t before.
The key phrase is LIFE Insurance, not HEALTH Insurance!
They are vastly different markets.
You don’t deny claims for life insurance as companies would do for health insurance. It’s a very different set of circumstances to have to deny life insurance.
Not responding about the article, but I remember interviewing with McKinsey as a graduating PhD. I had just passed their test and was going through the case study interview and I got paired with a PhD in physics from MIT. I think the study was something about cognac sales and I just got disgusted with the waste of training and talent, and after I got home from the airport that evening I pulled out of the interview process even though I had no other employment options at the time.
Somewhat ironically, over the past 20 years I’ve come to reject PhD-type career tracks after seeing how much PhD overproduction there is and how my older colleagues only had a BS or MS. These days, I yearn to leave my Big Tech job to start a “boring” business. Right now I’m taking Accounting 101 at a local university to understand business financials better.
My experience working at a consulting company was that we were a software development agency with change management. It can work well. However as a consultancy, the incentive is also to continually develop our integration into your organization so that you continue to need us.
The insurance business is mostly about hoarding and investing money so you can actually pay out when you have to.
Unless you can solve that part of the problem as well as the big players do, you will run into problems at some point; using extreme value theory, you can even estimate when.
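The extreme-value idea mentioned above can be sketched with a standard peaks-over-threshold approach: fit a Generalized Pareto distribution to claim amounts above a high threshold, then estimate how likely a single claim is to exceed the insurer's reserves. The claim data and reserve level below are synthetic, purely for illustration.

```python
# Rough sketch of the extreme-value-theory idea: peaks-over-threshold
# estimation of the probability that a claim exceeds reserves.
# Claims and reserves are made-up numbers for illustration only.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
claims = rng.pareto(2.5, size=10_000) * 50_000   # synthetic heavy-tailed claims

# Model only the tail: take exceedances over the 95th percentile.
threshold = np.quantile(claims, 0.95)
excesses = claims[claims > threshold] - threshold

# Fit a Generalized Pareto distribution to the excesses (location fixed at 0).
shape, _, scale = genpareto.fit(excesses, floc=0)

reserves = 2_000_000
# P(claim > reserves) = P(claim > u) * P(excess > reserves - u | claim > u)
p_exceed = 0.05 * genpareto.sf(reserves - threshold, shape, loc=0, scale=scale)
print(f"Estimated P(single claim > reserves) ~ {p_exceed:.2e}")
```

With an estimate like this and an expected claim arrival rate, you can back out roughly how long until a reserve-breaking claim becomes likely, which is the "you can even estimate when" part.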
My understanding was that the "enemy" was McKinsey, a firm that has a reputation to me as being an expensive consulting firm filled with MBA types who frequently are hired by companies.
My understanding of this reputation is that this often happens to the detriment of either product quality or employee satisfaction. It's debatable whether they actually have a reputation for providing value. Short term? Maybe, albeit expensive. Long term? I'd say no.
Dealing with a machine is unlikely to be worse.
2) Having many firms serve a market is always better for consumers as well instead of a single firm. (with a few notable exceptions)
3) In terms of large scale, its impossible to scale efficiently across countries as you navigate new political and economic structures.
> Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job
Less work is... good? The ethics here are positive. More work, more pain.
You can take the founder out of a consultancy, but you can't take the consultancy out of the founder.
Isn't that... good? What else would you expect?
biker142541|9 months ago
chollida1|9 months ago
Why would they fire him after a single incident?
Sounds like McKinsey is a more compassionate organization than you, and that's saying something :)
yodsanklai|9 months ago
watwut|9 months ago
shitpostbot|9 months ago
[deleted]
JohnMakin|9 months ago
nforgerit|9 months ago
They essentially lied about any anticipated KPI potential and let their "tech" people put together a 15k EUR/month (before public release) platform on AWS, which was such a mess that the second year's CTO started from scratch. After some heavy arguments over their poor performance, McKinsey agreed to let some "non-technical" people work there for a couple of months for free. Every argument you had with the McKinsey "Engineers" felt like talking to AWS Sales: they had barely any technical insight, just a catalog of "pre-made solutions" to choose from.
whistle650|9 months ago
dzink|9 months ago
MPSFounder|9 months ago
greatgib|9 months ago
It is not malevolence but more a deficiency by design. First, as you said, most consultants are not real "leaders" or "tech leaders," and once you start to get real experience, you either leave consulting or climb the hierarchical ladder to become a more senior manager, who spends more time finding contracts, negotiating, and dealing with customer proposals and renewals than doing the actual job.
In the end, you have juniors doing the tasks while pretending to be sector experts, or, like the guy in the article, you get propelled to "CEO" of a big entity of a big corp after only 4 or 5 years of basic consulting experience, without ever having worked a real non-consulting job.
And when you buy an engagement from such a firm, most of the cost goes to structural overhead and the daily rates of a chain of useless parasitic executives (directors, executive partners, vice presidents) who will spend 10 minutes per month reviewing slides on the project. The consultant actually doing the job will be paid at most double what a good freelancer can expect.
And regarding the spirit: the funny thing is that even when you do bad or evil-ish things, there is a kind of mental block that makes you truly believe you are doing useful and much-needed work, despite that not being the case.
In my own case, I often had this bad feeling deep down: a nagging sense that the customer we were negotiating a multi-million engagement with could have just hired a decent developer for a few thousand euros and ended up with a better, more successful system.
But, like in The Matrix, when you are inside the system, it is hard to consider things outside the box.
In the same way, going back to the example in the blog post, I'm quite sure the big company could have found hundreds of existing employees who would have fit the bill as well or better, for a lot cheaper.
throwanem|9 months ago
tiffanyh|9 months ago
Having worked in highly regulated industries, I’ve learned that the best way to disrupt incumbents is by creating a product that assumes more business risk than is typically accepted. Large, regulated companies are extremely risk-averse—so if you can take on that risk in a smart, innovative way, you’ll win.
Alpha3031|9 months ago
phendrenad2|9 months ago
The fact that the company has become a sort of pseudo-VC (mentorship but not financing) for small teams within megacorps is interesting. I wonder why large corps find it so difficult to innovate. I think that they become somewhat "load-bearing" in society and the lines between the company and the market begin to blur. Any change the company makes causes a misalignment because they shaped the market to fit themselves.
mettamage|9 months ago
Now that I'm working at a big organization (a Fortune 500 company), I can relate. I'm by far the most innovative person in my team and I'm being held down because I'm not doing my role (as I'm not a dev but a data analyst at the moment).
If I were doing my role, however, then we wouldn't be innovating, and the C-suite wants us to innovate with AI. I'm the only one in my department who can create actual AI automations. And the IT department has basically been stripped down by upper management.
If anyone wants an actual dev building AI automations and thinking about how to disrupt with the state of the art, my email is in my profile.
TrackerFF|9 months ago
mentalgear|9 months ago
Oras|9 months ago
So 3 years at McKinsey taught OP the corporate BS. That paragraph doesn't say anything useful.
jdbernard|9 months ago
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management.
We think we can be bigger (more customers, more sales, more money) than all existing players.
> We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably.
We're looking to eclipse the population of any one country and we're going to use something like Bitcoin to side-step national currencies (and maybe also to avoid existing regulatory structure, not clear from the ambiguous language).
> We plan to do with 100 people what Allianz and others do with 100,000.
We believe we can automate or use AI to eliminate the need for people to actually support these billion customers.
All three of those are very bold statements/goals.
mannyv|9 months ago
Some of them, off the top of my head: number of customers, number of active policies, premium amount, assets under management, time to claim resolution, etc. He's talking to business people who understand the insurance market.
w10-1|9 months ago
I really think every founder (and startup worker) needs to take seriously the marketing side of the business, and not just believe that new technology will win.
(While I, too, am allergic to bitcoin scams, given increasing levels of political corruption monkeying with markets, rates, and regulation, I can also see it as an enticing alternative for those looking to get long-term investments off the dollar. For insurance, the main question is, will the money be there and be made available? Having seen even highly-regulated pensions fail (without federal insurance recourse in the case of religious hospital behemoths), I can see how technical guarantees independent of regulation or law could be compelling.)
Lienetic|9 months ago
> And though when we started our business in 2023 (ChatGPT wasn’t out yet), you could begin to feel that something like that was possible in a way it wasn’t before.
perhaps a typo in year?
MR4D|9 months ago
The key phrase is LIFE Insurance, not HEALTH Insurance!
They are vastly different markets.
You don’t deny claims for life insurance the way companies do for health insurance. Denying a life insurance claim is a very different set of circumstances.
georgeburdell|9 months ago
Somewhat ironically, over the past 20 years I’ve come to reject PhD-type career tracks after seeing how much PhD overproduction there is and how my older colleagues only had a BS or MS. These days, I yearn to leave my Big Tech job to start a “boring” business. Right now I’m taking Accounting 101 at a local university to understand business financials better.
unknown|9 months ago
[deleted]
liampulles|9 months ago
Very much a symbiotic vs parasitic relationship.
niemandhier|9 months ago
Unless you can solve that part of the problem as well as the big players, you will run into problems at some point; using extreme value theory you can even estimate when.
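The "estimate when" point can be made concrete with a block-maxima fit. A minimal sketch, assuming a Gumbel distribution for yearly claim maxima and using a method-of-moments fit — the claim data here is simulated and every figure is a made-up assumption, not anything from the thread:

```python
import math
import random

def fit_gumbel(block_maxima):
    # Method-of-moments fit for a Gumbel distribution:
    # beta = s * sqrt(6) / pi, mu = mean - gamma * beta (gamma ~ 0.5772)
    n = len(block_maxima)
    mean = sum(block_maxima) / n
    var = sum((x - mean) ** 2 for x in block_maxima) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi
    mu = mean - 0.5772156649 * beta
    return mu, beta

def return_period(threshold, mu, beta):
    # Expected number of blocks (e.g. years) between exceedances of `threshold`,
    # T = 1 / (1 - F(threshold)), with Gumbel CDF F(x) = exp(-exp(-(x - mu) / beta)).
    f = math.exp(-math.exp(-(threshold - mu) / beta))
    return 1.0 / (1.0 - f)

# Hypothetical: 30 years of yearly maxima of individual claim payouts (in $M),
# each year being the max of 250 exponentially distributed claims (mean $5M).
random.seed(0)
yearly_maxima = [max(random.expovariate(1 / 5.0) for _ in range(250))
                 for _ in range(30)]
mu, beta = fit_gumbel(yearly_maxima)
print(return_period(60.0, mu, beta))  # avg years between years with a >$60M claim
```

A small insurer without the big players' reinsurance capacity can plug its own loss history into something like this and read off roughly how long it has before an outlier year arrives.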
kwere|9 months ago
In crypto, this means rugpull time.
SoftTalker|9 months ago
davidjfelix|9 months ago
My understanding of this reputation is that it often comes at the detriment of either product quality or employee satisfaction. It's debatable whether they actually have a reputation for providing value. Short term? Maybe, albeit expensive. Long term? I'd say no.
account-5|9 months ago
vonnik|9 months ago
one nitpick:
> And though when we started our business in 2023 (ChatGPT wasn’t out yet), you could begin to feel that something like that was possible in a way it wasn’t before.
ChatGPT launched in late 2022...
zt|9 months ago
throwaway743|9 months ago
rafelolszewski|9 months ago
[deleted]