topkai22 | 6 months ago
While the article presents cases that appear to be problematic in the particulars, I think concluding that bosses/managers shouldn't be pushing or mandating the use of AI tools in general is incorrect.
It's quite possible that any one new AI tool is wrong, but it is unlikely all of them are. Great historical analogies are the adoption of PCs in the 80s and the adoption of the internet/web in the 90s. Not everything we tried back then was an improvement on existing technologies or processes, but in general, if you weren't experimenting across a broad swath of your business, you were going to get left behind.
It's easy to defend the utility of these tools so long as you caveat them. For example, I've had a lot of success with AI-driven code generation for utility scripts, but it is less useful for full-fledged feature development in our main code base. AI-driven code summarization and its ability to enforce coding standards on PRs are a huge help.
Finally, I find the worries in the article about using these tools on sensitive data or in scenarios such as ideation to be rather overdrawn. They are just SaaS services. You shouldn't use the free version of most tools for business purposes due to often-problematic licensing, but purchasing and legal should be able to help find an appropriate service. After all, if you are using Google Docs or Microsoft 365 to create and store your documents, why would you treat Gemini or Copilot (or their other LLM options) as presenting higher legal peril, at least with some due diligence that they don't retain or train on your input?
mgh95 | 6 months ago
There is a difference between experimentation and mandated usage, however. With experimentation, you typically see "shadow IT" attempting to access useful tools outside the bounds of what is considered acceptable, which indicates a greater willingness to adopt than mandated usage does.
There is also a difference between a technology replicating existing functionality in a new medium (email vs. USPS) and the introduction of a genuinely new technology. In the former, there is clear market demand, and it is only a matter of redirecting existing demand to new tools. In the latter, it is unclear whether the technology will be useful at all.
I don't think that LLMs being a new technology that uses computing makes them the internet, and I don't think it's accurate to analyze them through the lens you propose.
beezlewax | 6 months ago
How so? I have access to a huge number of these tools and they're all pretty similar.
bigstrat2003 | 6 months ago
If the tool is good, then management won't need to mandate it. People will be tripping over themselves to get access to the tool that helps them to do their job better. So perhaps you're right that some of the tools will be good (though I personally haven't yet had that experience), but I think that it is incorrect for managers to push for (let alone mandate) tool usage. Measure the result, not the path an employee takes to get there. If Bob uses AI tools to great effect, but Alice is doing just as well as him without using said tools, it's a mistake to force her to change her workflow thinking that the tools will be just as good for her as for Bob.
pmg101 | 6 months ago
However, this is a subtle and nuanced situation requiring careful people management: nudging or leading people, letting them take risks, letting them fail, giving them psychological safety, and praising their attempts. Blanket mandates are just a very tone-deaf and stupid way to try to achieve this.
makeitdouble | 6 months ago
Another historical analogy is Scientific Management, pushed top-down and widely adopted by industry. It had many flavors, and all of them were wrong.
We have examples pointing in basically any direction one would like to argue for. Historical precedent isn't a good argument, IMHO.
EagnaIonat | 6 months ago
All of them can absolutely be wrong at the same time, but the tool isn't the main issue, IMHO. It's the user.
For simple, generic stuff it's not an issue, but where you need an expert, it has to be an expert in that field who uses the AI, so that they know what is wrong.
A good recent example is the OpenAI Academy. The site content is clearly generated by ChatGPT and completely misses the point of the areas it claims to be training you in.