item 46857528

CMay | 27 days ago

1. Find out what Co-pilot's reputation is among power users.

2. Realize that Co-pilot is bad and needs to improve until it meets Microsoft's highest gold standards of trustworthiness.

3. Ditch Co-pilot branding inside the OS.

4. Make AI features private and offline by default unless the local hardware cannot run the specialized tiny model for that task, at which point it goes online for it. It might be slower, but if it does the thing, it's ok.

5. Allow companies and power users to provide their own local models that hook into these tasks, so a company can host AI servers in-house and these AI tasks never leave the company network.

6. Make AI features more specific, targeted and useful instead of simply integrating AI into various functions and throwing it at users like "here, you figure out what to do with this thing, we don't know."

7. Don't expect people to want to chat with it in every app, just find a task that you know it succeeds at and expose that task rather than letting users figure out what it sucks at.

8. Don't let the AI integration APIs become an enlarged privacy and security attack surface that 3rd-party system apps can hook into to mass-extract information from every app on your system. Put limitations on it.

9. Add features to specify where AI can and cannot go, just like microphone permissions: folders, apps, online services. Even if it does use Co-pilot online, let users sculpt its reach.

10. Make it explicit and obvious when AI features are operating offline or online. If users have decades of understanding that Notepad is a private offline app, preserve that expectation as much as possible. Just because Outlook and OneNote are very online-oriented apps doesn't mean users want their local experience to be online in every way. If you force AI to go over all my cloud files, notes and e-mail without my permission, that is sociopathic behavior and I will ditch you, Microsoft.
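Points 4 and 5 above amount to a simple routing policy, which could be sketched like this. Every name here (run_local, company_endpoints, call_endpoint, the URLs) is invented for illustration; none of this is a real Windows or Copilot API:

```python
# Hypothetical "local first" routing: try an on-device model, then a
# company-registered server, and only then the vendor cloud (points 4 and 5).

company_endpoints = {
    # A company could register its own internal server per task (point 5),
    # so requests never leave the corporate network.
    "summarize": "https://ai.internal.example.com/summarize",
}

def run_local(task: str, text: str) -> str:
    # Stand-in for on-device inference with a specialized tiny model.
    return f"[local:{task}]"

def call_endpoint(url: str, task: str, text: str) -> str:
    # Stand-in for a network call; a real client would POST `text` to `url`.
    return f"[{url}:{task}]"

def run_ai_task(task: str, text: str, hardware_ok: bool, online_allowed: bool) -> str:
    if hardware_ok:
        return run_local(task, text)  # private and offline by default
    if task in company_endpoints:
        return call_endpoint(company_endpoints[task], task, text)  # stays in-house
    if online_allowed:
        return call_endpoint("https://copilot.example.com", task, text)  # last resort
    raise RuntimeError(f"'{task}': no local model and online use is disabled")
```

The key property is that the cloud path is reachable only when both local options are exhausted and the user has explicitly allowed online use; otherwise the task fails loudly rather than phoning home.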
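The microphone-style scoping in point 9 could look something like the sketch below. The AIScope class, app names, and paths are all hypothetical, purely to show the deny-wins shape of such a permission model:

```python
# Hypothetical deny-list scoping for AI access, modeled on microphone
# permissions (point 9): users block folders and apps, and a blocked
# scope stops access even when the request would otherwise go online.

from fnmatch import fnmatch

class AIScope:
    def __init__(self) -> None:
        self.denied_paths: list[str] = []   # glob patterns AI may never read
        self.denied_apps: set[str] = set()  # apps where AI features are off

    def deny_path(self, pattern: str) -> None:
        self.denied_paths.append(pattern)

    def deny_app(self, app: str) -> None:
        self.denied_apps.add(app)

    def may_read(self, app: str, path: str) -> bool:
        # Deny wins: a blocked app or a blocked folder stops access outright.
        if app in self.denied_apps:
            return False
        return not any(fnmatch(path, pat) for pat in self.denied_paths)
```

Usage would be per-user: deny a private folder once and no AI feature, local or online, may read from it, regardless of which app asks.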

Some day Co-pilot will probably be good. That isn't today. It's probably not this year or next year, but eventually. Until then, it needs to stay in a lane with freshly painted lines surrounded with sand barrels in case it wrecks.

It's not that I'm entirely opposed to some Microsoft AI feature existing in Windows, but manufacturing a user assumption that it is everywhere all the time is bad not just for Windows, but for society as a whole.

We've already seen how political and activist the public sphere became over the last decade, which reduces trust in the people who make software and services too. What do we do when Microsoft gets ideologically taken over and abuses its information access to people for political ends?

Show you can be trusted. When I put a little food bowl down for you, don't scratch me and we'll go from there.
