I was hoping for a moment that this meant they had come up with a design that was safe against lethal trifecta / prompt injection attacks, maybe by running everything in a tight sandbox and shutting down any exfiltration vectors that could be used by a malicious prompt attack to steal data.
Sadly they haven't completely solved that yet. Instead their help page at https://support.claude.com/en/articles/13364135-using-cowork... tells users "Avoid granting access to local files with sensitive information, like financial documents" and "Monitor Claude for suspicious actions that may indicate prompt injection".
(I don't think it's fair to ask non-technical users to look out for "suspicious actions that may indicate prompt injection" personally!)
> (I don't think it's fair to ask non-technical users to look out for "suspicious actions that may indicate prompt injection" personally!)
It's the "don't click on suspicious links" of the LLM world and will be just as effective. It's the system they built that should prevent those being harmful, in both cases.
It's so important to remember that, unlike code, which can be reverted, most file system and application operations cannot be.
There are no snapshots, no revision history, no rollbacks - nothing.
I expect to see many stories from parents, non-technical colleagues, and students who irreparably ruined their computer.
Edit: most comments are focused on pointing out that version control & file system snapshots exist: that's wonderful, but Claude Cowork does not use them.
For those of us who have built real systems at low levels, I think the alarm bells go off seeing a tool like this - particularly one targeted at non-technical users.
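Until a tool like this grows rollbacks of its own, one do-it-yourself mitigation is to copy a folder aside before handing an agent access to it. A minimal sketch in Python - the backup location and naming scheme are arbitrary choices of mine, not anything Cowork provides:

```python
import pathlib
import shutil
import time

def snapshot(folder: str, backup_root: str = "~/.agent-backups") -> pathlib.Path:
    """Copy a folder aside before an agent touches it, so there is
    at least one manual rollback point."""
    src = pathlib.Path(folder).expanduser()
    # Timestamped destination, e.g. ~/.agent-backups/taxes-1731000000
    dst = pathlib.Path(backup_root).expanduser() / f"{src.name}-{int(time.time())}"
    shutil.copytree(src, dst)
    return dst

# e.g. snapshot("~/Documents/taxes") before sharing that folder with an agent
```

This is no substitute for real filesystem snapshots (APFS, VSS), but it is better than nothing for the non-technical users this product targets.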
Frequency vs. convenience will determine how big of a deal this is in practice.
Cars have plenty of horror stories associated with them, but convenience keeps most people happily driving every day without a second thought.
Google can quarantine your life with an account ban, but plenty of people still use gmail for everything despite the stories.
So even if Claude Cowork can go off the rails and turn your digital life upside down, as long as the stories are just online or "friend of a friend of a friend", people won't care much.
Once upon a time, in the magical days of Windows 7, we had the Volume Shadow Copy Service (aka "Previous Versions") available by default, and it was so nice. I'm not using Windows anymore, and at least part of the reason is that it's just objectively less feature-complete than it was 15 years ago.
Hi, Felix from the team here, this is my product - let us know what you think. We're releasing this very early on purpose; we expect to iterate on it rapidly.
(We're also battling an unrelated Opus 4.5 inference incident right now, so you might not see Cowork in your client right away.)
Your terms for Claude Max point to the consumer ToS, which states the service cannot be used for commercial purposes. Why is this? Why are you marketing a product clearly aimed at business use while having terms that strictly forbid it?
I’ve been trying to reach a human at Anthropic for a week now to clarify this on behalf of our company but can’t get past your AI support.
Anthropic blog posts have always caused a blank page for me, so I had Claude Code dig into it using an 11 MB HAR of a session that reproduces the problem, and it used grep and sed(!) to find the issue in just under 5 minutes (4m56s).
Turns out that the data-prevent-flicker attribute is never removed if the Intellimize script fails to load. I use DNS-based adblock and I can confirm that allowlisting api.intellimize.co solves the problem, but it would be great if this could be fixed for good, and I hope this helps.
People do realize that if they're doing this, they're not feeding "just" code into some probably logging cloud API but literally anything (including, as mentioned here, bank statements), right?
Right?
RIGHT??????
Are you sure that you need to grant the cloud full access to your desktop + all of its content to sort elements alphabetically?
The reality is there are some of us who truly just don't care. The convenience outweighs the negative. Yesterday I told an agent, "here's my api key and my root password - do it for me". Privacy has long since been dead, but at least for myself opsec for personal work is too.
It's really quite amazing that people would actually hook an AI company up to data that actually matters. I mean, we all know that they're only doing this to build a training data set to put your business out of business and capture all the value for themselves, right?
A few months ago I would have said that no, Anthropic make it very clear that they don't ever train on customer data - they even boasted about that in the Claude 3.5 Sonnet release back in 2024: https://www.anthropic.com/news/claude-3-5-sonnet
> One of the core constitutional principles that guides our AI model development is privacy. We do not train our generative models on user-submitted data unless a user gives us explicit permission to do so.
But they changed their policy a few months ago, so now, as of October, they are much more likely to train on your inputs unless you've explicitly opted out: https://www.anthropic.com/news/updates-to-our-consumer-terms
This sucks so much. Claude Code started nagging me for permission to train on my input the other day, and I said "no" but now I'm always going to be paranoid that I miss some opt-out somewhere and they start training on my input anyway.
And maybe that doesn't matter at all? But no AI lab has ever given me a convincing answer to the question "if I discuss company private strategy with your bot in January, how can you guarantee that a newly trained model that comes out in June won't answer questions about that to anyone who asks?"
I don't think that would happen, but I can't in good faith say to anyone else "that's not going to happen".
For any AI lab employees reading this: we need clarity! We need to know exactly what it means to "improve your products with your data" or whatever vague weasel-words the lawyers made you put in the terms of service.
> I mean, we all know that they're only doing this to build a training data set
That's not a problem. It leads to better models.
> to put your business out of business and capture all the value for themselves, right?
That's both true and paranoid. Yes, LLMs subsume most of the software industry, and many things downstream of it. There's little anyone can do about it; this is what happens when someone invents a brain on a chip. But no, LLM vendors aren't gunning for your business. They neither care, nor have the capability to perform if they did.
In fact my prediction is that LLM vendors will refrain from cannibalizing distinct businesses for as long as they can - because as long as they just offer API services (broad as they may be), they can charge rent from an increasingly large amount of the software industry. It's a goose that lays golden eggs - makes sense to keep it alive for as long as possible.
> They can and most likely will release something that vaporises the thin moat you have built around their product.
As they should if they're doing most of the heavy lifting.
And it's not just LLM-adjacent startups at risk. LLMs have enabled any random person with a Claude Code subscription to pole-vault over your drying-up moat over the course of a weekend.
A CLI chat interface seems ideal for when you keep code "at a distance", i.e. if you hardly/infrequently/never want to peek at your code.
But for writing prose, I don't think chat-to-prose is ideal, i.e. most people would not want to keep prose "at a distance".
I bet most people want to be immersed in an editor where they are seeing how the text is evolving. Something like Zed's inline assistant, which I found myself using quite a lot when working on documents.
I was hoping that Cowork might have some elements of an immersive editor, but it's essentially transplanting the CLI chat experience to an ostensibly "less scary" interface, i.e., keeping the philosophy of artifacts separate from your chat.
I agree. For writing documents, and for a lot of other things like editing CSV files or mockups, I want to be immersed in the editor together with Claude Code, not in a chat separated from my editors.
Hey, don't forget booking your flights! Because everyone who has ever flown knows it's very safe to let an RNG machine book something like a flight for you!
This looks useful for people not using Claude Code, but I do think that the desktop example in the video could be a bit misleading (particularly for non-developers) - Claude is definitely not taking screenshots of that desktop & organizing, it's using normal file management cli tools. The reason seems a bit obvious - it's much easier to read file names, types, etc. via an "ls" than try to infer via an image.
But it also gets to one of Claude's (Opus 4.5) current weaknesses - image understanding. Claude really isn't able to understand details of images in the same way that people currently can - this is also explained well with an analysis of Claude Plays Pokemon https://www.lesswrong.com/posts/u6Lacc7wx4yYkBQ3r/insights-i.... I think over the next few years we'll probably see all major LLM companies work on resolving these weaknesses & then LLMs using UIs will work significantly better (and eventually get to proper video stream understanding as well - not 'take a screenshot every 500ms' and call that video understanding).
I keep seeing “Claude image understanding is poor” being repeated, but I’ve experienced the opposite.
I was running some sentiment analysis experiments; describe the subject and the subject's emotional state kind of thing. It picked up on a lot of little details: the brand name of my guitar amplifier in the background, what my t-shirt said and that I must enjoy craft beer and/or running (it was a craft-beer 5k kind of thing), and it tracked my movement through multiple frames. This was a video sliced into a frame every 500ms; it noticed me flexing, giving the finger, appearing happy, angry, etc.
I was really surprised how much it picked up on, and how well it connected those dots together.
> Claude is definitely not taking screenshots of that desktop & organizing, it's using normal file management cli tools
Are you sure about that?
Try "claude --chrome" with the CLI tool and watch what it does in the web browser.
It takes screenshots all the time to feed back into the multimodal vision and help it navigate.
It can look at the HTML or the JavaScript but Claude seems to find it "easier" to take a screenshot to find out what exactly is on the screen. Not parse the DOM.
So I don't know how Cowork does this, but there is no reason it couldn't be doing the same thing.
Agents for other people, this makes a ton of sense. Probably 30% of the time I use claude code in the terminal it's not actually to write any code.
For instance I use claude code to classify my expenses (given a bank statement CSV) for VAT reporting, and fill in the spreadsheet that my accountant sends me. Or for noting down line items for invoices and then generating those invoices at the end of the month. Or even booking a tennis court at a good time given which ones are available (some of the local ones are north/south facing which is a killer in the evening). All these tasks could be done at least as well outside the terminal, but the actual capability exists - and can only exist - on my computer alone.
I hope this will interact well with CLAUDE.md and .claude/skills and so forth. I have those files and skills scattered all over my filesystem, so I only have to write the background information for things once. I especially like having claude create CLIs and skills to use those CLIs. Now I only need to know what can be done, rather than how to do it - the “how” is now “ask Claude”.
It would be nice to see Cowork support them! (Edit: I see that the article mentions you can use your existing 'connectors' - MCP servers I believe - and that it comes with some skills. I haven't got access yet so I can't say if it can also use my existing skills on my filesystem…)
(Follow-up edit: it seems that while you can mount your whole filesystem and so forth in order to use your local skills, it uses a sandboxed shell, so your local commands (for example, tennis-club-cli) aren't available. It seems like the same environment that runs Claude Code on the Web. This limits the use for the moment, in my opinion. Though it certainly makes it a lot safer...)
Do the people rushing off to outsource their work to chatbots have a plan to explain to their bosses why they still need to have a job?
What's the play after you have automated yourselves out of a job?
Retrain as a skilled worker? Expect to be the lucky winner who is in cahoots with the CEO/CTO and magically gets to keep their job? Expect society to turn to social democracy and produce UBI? Make enough money to live off an investment portfolio?
It's a little funny how the "Stay in control" section is mostly about how quickly you can lose control (deleting files, prompt injections). I can foresee non-technical users giving access to unfortunate folders and getting into a lot of trouble.
Is anybody out there actually being more productive in their office work by using AI like this? AI for writing code has been amazing but this office stuff is a really hard sell for me. General office/personal productivity seems to be the #1 use-case the industry is trying to sell but I just don't see it. What am I missing here?
This looks pretty cool. I keep seeing people (and am myself) using Claude Code for more and more _non-dev_ work. Managing different aspects of life, work, etc. Anthropic has built the best harness right now. Building out the UI makes sense to drive genpop adoption.
Yeah, the harness quality matters a lot. We're seeing the same pattern at Gobii - started building browser-native agents and quickly realized most of the interesting workflows aren't "code this feature" but "navigate this nightmare enterprise SaaS and do the thing I actually need done." The gap between what devs use Claude Code for vs. what everyone else needs is mostly just the interface.
This is the natural evolution of coding agents. They're the most likely to become the general-purpose agents everyone uses for daily work, because they have the most mature and comprehensive capability around tool use, especially on the filesystem, but also in opening browsers, searching the web, running programs (via command line for now), etc. They become your OS, colleague, and likely your "friend" too.
I just helped a non-technical friend install one of these coding agents, because it's the best way to use an AI model today that can do more than give him answers to questions. I'm not surprised to see this announced and I would expect the same to happen with all the code agents becoming generalized like this.
The biggest challenge towards adoption is security and data loss. Prompt injection and social engineering are essentially the same thing, so I think prompt injection will have to be solved the same way. Data loss is easier to solve with a sandbox and backups. Regardless, I think for many the value of using general purpose agents will outweigh the security concerns for now, until those catch up
For those worried about irrevocable changes: sometimes a good plan is all the output you need.
Claude Code is very good at `doc = f(doc, incremental_input)` where doc is a code file. It's no different if doc is a _prompt file_ designed to encapsulate best practices.
Hand it a set of unstructured SOP documents, give it access to an MCP for your email, and have it gradually grow a set of skills that you can then bring together as a knowledge base auto-responder instruction-set.
Then, unlike many opaque "knowledge-base AI" products, you can inspect exactly how over-fitted those instructions are, and ask it to iterate.
What I haven't tried is whether Cowork will auto-compact as it goes through that data set, and/or take max-context-sized chunks and give them to a sub-agent who clears its memory between each chunk. Assuming it does, it could be immensely powerful for many use cases.
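The `doc = f(doc, incremental_input)` shape above is just a left fold over inputs. A toy sketch, where `refine` is my stand-in for one model call (in practice it would be a prompt like "update DOC so it also covers CHUNK"):

```python
from functools import reduce

def refine(doc: str, chunk: str) -> str:
    # Stand-in for one LLM call that folds a new input into the
    # accumulated document.
    return (doc + "\n" + chunk).strip()

def build_doc(chunks: list[str], seed: str = "") -> str:
    # The whole workflow is a left fold over incremental inputs:
    # f(f(f(seed, c1), c2), c3) ...
    return reduce(refine, chunks, seed)

sops = ["Triage the ticket.", "Check the knowledge base.", "Draft a reply."]
print(build_doc(sops))
```

Framing it this way makes the sub-agent question above concrete: chunking just means choosing how much input each application of `f` sees before memory is cleared.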
Under the hood, is this running shell commands (or Apple events) or is it actually clicking around in the UI?
If the latter, I'm a bit skeptical, as I haven't had great success with Claude's visual recognition. It regularly tells me there's nothing wrong with completely broken screenshots.
The thing about Claude Code is that it's usually used in version-controlled directories. If Claude f**s up badly, I can revert to a previous git commit. If it runs amok on my office documents, I'm going to have a harder time recovering those.
Exciting to see Anthropic validate the "AI coworker" direction. We're building VITA AI (https://vita-ai.net) with similar philosophy but for enterprise QA testing.
One key architectural difference: Cowork runs sandboxed VMs on your local macOS machine, but we run sandboxes entirely in the cloud. This means:
- True isolation - agents never touch your local files or network, addressing the security concerns raised in this thread
- Actual autonomy - close your laptop, agent keeps working. Like delegating to a real coworker, not pairing with an assistant
- Scale - spin up 10 test agents without melting your CPU
The trade-off is latency and offline capability, but for testing workflows (our focus), asynchronous cloud execution is actually the desired model. You assign "test the checkout flow," go to lunch, come back to a full test report + artifacts.
Different use cases, different architectures. But the broader trend feels right - moving from conversational assistants to autonomous agents that operate independently.
I wrote up some first impressions of Claude Cowork here, including an example of it achieving a task for me (find the longest drafts in my blog-drafts folder from the past three months that I haven't published yet) with screenshots.
I tend to think this product is hard for those of us who've been using `claude` for a few months to evaluate. All I have seen and done so far with Cowork are things _I_ would prefer to do with the terminal, but for many people this might be their first taste of actually agentic workflows. Sometimes I wonder if Anthropic sort of regret releasing Claude Code in its 'runs your stuff on your computer' form - it can quite easily serve as so many other products they might have sold us separately instead!
I’ve tried just about every system for keeping my desktop tidy: folders, naming schemes, “I’ll clean it on Fridays,” you name it. They all fail for the same reason: the desktop is where creative work wants to spill out. It’s fast, visual, and forgiving. Cleaning it is slow, boring, and feels like admin.
Claude Cleaner, I mean Cowork will be sweeping my desktop every Friday.
Hmm. I'm building something (quick and dirty) at the moment that looks at analysing customer service data.
Something like this is promising but from what I can see, still lacking. So far I've been dealing with the regular issues (models aren't actually that smart, work with their strengths and weaknesses) but also more of the data problem - simple embeddings just aren't enough, imo. And throwing all of the data at the model is just asking for context poisoning, hallucinations and incorrect conclusions.
Been playing with instruction-tuned embeddings/sentiment and almost building a sort of "multimodal" system of embeddings to use with RAG/db calls. What I call "data hiding" as well - allowing the model to see the shape of the data but not the data itself, except when directly relevant.
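A minimal sketch of that "data hiding" idea: let the model see the shape of tabular data (column names, inferred types, row count) without a single cell value. The helper name and summary format are my own illustration, not from any product:

```python
def shape_only(rows: list[dict]) -> dict:
    """Summarize tabular data without exposing any cell values."""
    if not rows:
        return {"columns": {}, "row_count": 0}
    # Infer a type name per column from the first row.
    columns = {key: type(value).__name__ for key, value in rows[0].items()}
    return {"columns": columns, "row_count": len(rows)}

tickets = [
    {"id": 1, "email": "a@example.com", "sentiment": -0.4},
    {"id": 2, "email": "b@example.com", "sentiment": 0.7},
]
# The model's context gets this summary, never the emails themselves.
print(shape_only(tickets))
# → {'columns': {'id': 'int', 'email': 'str', 'sentiment': 'float'}, 'row_count': 2}
```

The agent can then request specific values only when a task makes them directly relevant, which keeps most of the dataset out of the context window and out of any logs.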
This sounds really interesting. Perhaps this is the promise that Copilot was not. I'm really hoping that this gives people like my wife access to all the things I use Claude Code for.
I use Claude Code for everything. I have a short script in ~/bin/ called ,cc that I launch that starts it in an appropriate folder with permissions and contexts set up:
~ tree ~/claude-workspaces -d
/Users/george/claude-workspaces
├── context-creator
├── imessage
│ └── tmp
│ └── contacts-lookup
├── modeler
├── research
├── video
└── wiki
I'll usually pop into one of these (say, video) and say something stupid like: "Find the astra crawling video and stabilize it to focus on her and then convert into a GIF". That one knows it has to look in ~/Movies/Astra and it'll do the natural thing of searching for a file named crawl or something and then it'll go do the rest of the work.
Likewise, the `modeler` knows to create OpenSCAD files and so on, the `wiki` context knows that I use Mediawiki for my blog and have a Template:HackerNews and how to use it and so on. I find these make doing things a lot easier and, consequently, more fun.
All of this data is trusted information: i.e. it's from me, so I know I'm not trying to screw myself. My wife is less familiar with the command line, so she doesn't use Claude Code as much as me, and prefers the ChatGPT web app, for which we've built a couple of custom GPTs so we can do things together.
Claude is such a good model that I really want to give my wife access to it for the stuff she does (she models in Blender). The day that these models get really good at using applications on our behalf will be wonderful! Here's an example model we made the other day for the game Power Grid: https://wiki.roshangeorge.dev/w/Blog/2026-01-11/Modeling_Wit...
This is a great idea! I'm building something very similar with https://practicalkit.com , which is the same concept done differently.
It will be interesting for me, trying to figure out how to differentiate from Claude Cowork in a meaningful way, but there's a lot of room here for competition, and no one application is likely to be "the best" at this. Having said that, I am sure Claude will be the category leader for quite a while, with first-mover advantage.
I'm currently rolling out my alpha, and am looking for investment & partners.
I like this idea but really do not want to share my personal data with cloud-based LLM vendors.
I have a folder controlled by Git containing various markdown files - my personal knowledge base and work-planning files (it's a long story: I gradually migrated from EverNote->OneNote->Obsidian->plain markdown files + Git). Last time, I tried wiring a local LLM API (using LMStudio) into Claude Code/OpenCode and used the agent to analyze some documents, but the results were not great: it either couldn't find the files or the answer quality was bad.
I'm already using Claude Code to organize my work and life, so this makes a lot of sense. However, I just tried it and it's not clear how this is different from using Claude with projects. I guess the main difference is that it can be used within a local folder on one's computer, so it's more integrated into one's workflow, rather than a project where you need to upload your data. This makes sense.
"Claude can’t read or edit anything you don’t give it explicit access to"
How confident are we that this is a strict measure?
I personally have zero confidence in Claude rulesets and settings as a way to fence it in. I've seen Claude decide for itself what to access once its context bloats, and it can tend to ignore rules.
Unless there is an OS-level restriction they are adhering to?
I've been working with a claude-specific directory in Claude Code for non-coding work (and the odd bit of coding/documentation stuff) since the first week of Claude Code, or even earlier - I think when filesystem MCP dropped.
It's a very powerful way to work on all kinds of things. V. interested to try co-work when it drops to Plus subscribers.
This is cool, but Claude for Chrome seems broken - authentication doesn't work and there's a slew of recent reviews on the Chrome extension mentioning it.
Sharing here in case anybody from Anthropic sees and can help get this working again.
It may seem off-topic, but I think it hurts developer trust to launch new apps while old ones are busted.
In my opinion, these things are better run in the cloud to ensure you have a properly sandboxed, recoverable environment.
At this point, I am convinced that almost anyone heavily relying on a desktop chat application has far too many credentials scattered across the file system, ready to be grabbed and exploited.
Cowork feels like a real step toward usable agent AI — letting Claude actually interact with your files rather than just answer questions. But that also means we’ll really learn how robust (and safe) this stuff is once people start trying it on messy, real workflows instead of toy tasks.
I need to go and do some proper timings but for comparable questions and inputs this feels a lot faster. Possible I’m just being beguiled by the UI but it does seem as though the responses are coming back faster.
Is it possible this gets access to a faster API tier?
A lot of people here are discussing the security challenges. If you're interested, I'm working on a novel solution to the security of these systems.
The basic ideas: minimal privilege per task, a minimal and contained environment for everything, and heavy control over every action the AI performs. The AI can perform tasks without seeing any of your personal information in the process. A new kind of orchestration and privacy layer for zero-trust agentic actions.
Redactsure.com
From this thread I figured I'd plug my system - would love your feedback! I believe we are building a real solution to these security and privacy concerns.
While the entire field is early I do believe systems like my own and others will make these products safe and reliable in the near future.
Is there anything similar to this in the local world? I’m setting up a full local “ai” stack on a 48gb MacBook for my sensitive data ops. Using webui. Will still use sota cloud services for coding.
When I need to create something like a powerpoint or whatever I use claude code and invoke a claude skill that knows how to do it. Why would I use claude cowork instead of that?
A week ago I pitched to my managers that this form of general purpose claude code will come out soon. They were rather skeptical saying that claude code is just for developers. Now they can see.
This product barely works. It can't connect to the browser extension and when I share folders for it to access, nothing happens. I love early previews but maybe one more week?
This is interesting because in the other thread about Anthropic/Claude Code, people are arguing that Anthropic is right to focus on what CC is good at (writing code).
I use Claude 8+ hours per day. But this is probably the scariest use I can think of. An agent running with full privileges with no restriction. What can go wrong?
Isn't this just a UI over Claude Code? For most people, using the terminal means you could switch to many different coding CLIs and not be locked into just Claude.
I tried to get Claude to build me a spreadsheet last night. I was explicit in that I wanted an excel file.
It’s made one in the past for me with some errors, but a framework I could work with.
It created an "interactive artifact" that wouldn't work in the browser or their apps, then gaslit me through 3 revisions as I asked why it wasn't working.
Created a text file that it wanted me to save as a .csv to import into excel that failed hilariously.
When I asked it to convert the csv to an excel file it apologized and told me it was ready. No file to download.
I asked where the file was and it apologized again and told me it couldn’t actually do spreadsheets and at that point I was out of paid credits for 4 more hours.
Really like the look of this. I use Claude Code (and other CLI LLM tools) to interact with my large collection of local text files which I usually use Obsidian to write/update. It has been awesome at organization, summarization, and other tasks that were previously really time consuming.
Bringing that type of functionality to a wider audience and out of the CLI could be really cool!
felixrieseberg|1 month ago
There is much more to do - and our docs reflect how early this is - but we're investing in making progress towards something that's "safe".
cyanydeez|1 month ago
This is a perfect encapsulation of the same problem: https://www.reddit.com/r/BrandNewSentence/comments/jx7w1z/th...
Substitute AI with Bear
bashtoni|1 month ago
Simple suggestion: the logo should be a cow and an orc, to match how I originally read the product name.
h4ch1|1 month ago
https://github.com/thameera/harcleaner and https://har-sanitizer.pages.dev/
lelandfe|1 month ago
To bypass: `.transition_wrap { display: none }`
TeMPOraL|1 month ago
That's not a problem. It leads to better models.
> to put your business out of business and capture all the value for themselves, right?
That's both true and paranoid. Yes, LLMs will subsume most of the software industry, and many things downstream of it. There's little anyone can do about it; this is what happens when someone invents a brain on a chip. But no, LLM vendors aren't gunning for your business. They neither care, nor have the capacity to execute if they did.
In fact my prediction is that LLM vendors will refrain from cannibalizing distinct businesses for as long as they can - because as long as they just offer API services (broad as they may be), they can charge rent from an increasingly large share of the software industry. It's a goose that lays golden eggs - it makes sense to keep it alive for as long as possible.
Imnimo|1 month ago
What do the words "if it's instructed to" mean here? It seems like Claude can in fact delete files whenever it wants regardless of instruction.
For example, in the video demonstration, they ask "Please help me organize my desktop", and Claude decides to delete files.
ossa-ma|1 month ago
They can and most likely will release something that vaporises the thin moat you have built around their product.
This feels like the first time in tech where there are more startups/products being subsumed (agar.io style) than being created.
xlbuttplug2|1 month ago
As they should if they're doing most of the heavy lifting.
And it's not just LLM adjacent startups at risk. LLMs have enabled any random person with a claude code subscription to pole vault over your drying up moat over the course of a weekend.
dcchambers|1 month ago
There will always be a market for dedicated tools that do really specific things REALLY well.
d4rkp4ttern|1 month ago
But for writing prose, I don't think chat-to-prose is ideal, i.e. most people would not want to keep prose "at a distance".
I bet most people want to be immersed in an editor where they are seeing how the text is evolving. Something like Zed's inline assistant, which I found myself using quite a lot when working on documents.
I was hoping that Cowork might have some elements of an immersive editor, but it's essentially transplanting the CLI chat experience to an ostensibly "less scary" interface, i.e., keeping the philosophy of artifacts separate from your chat.
Flux159|1 month ago
But it also gets to one of Claude's (Opus 4.5) current weaknesses - image understanding. Claude really isn't able to understand details of images in the same way that people currently can - this is also explained well with an analysis of Claude Plays Pokemon https://www.lesswrong.com/posts/u6Lacc7wx4yYkBQ3r/insights-i.... I think over the next few years we'll probably see all major LLM companies work on resolving these weaknesses & then LLMs using UIs will work significantly better (and eventually get to proper video stream understanding as well - not 'take a screenshot every 500ms' and call that video understanding).
ElatedOwl|1 month ago
I was running some sentiment-analysis experiments: describe the subject and the subject's emotional state, that kind of thing. It picked up on a lot of little details: the brand name of my guitar amplifier in the background, what my t-shirt said and that I must enjoy craft beer and/or running (it was a craft-beer 5k kind of thing), and it tracked my movement through multiple frames. This was video sliced into a frame every 500ms, and it noticed me flexing, giving the finger, appearing happy, angry, etc. I was really surprised how much it picked up on, and how well it connected those dots.
EMM_386|1 month ago
Are you sure about that?
Try "claude --chrome" with the CLI tool and watch what it does in the web browser.
It takes screenshots all the time to feed back into the multimodal vision and help it navigate.
It can look at the HTML or the JavaScript, but Claude seems to find it "easier" to take a screenshot to see exactly what's on the screen rather than parse the DOM.
So I don't know how Cowork does this, but there is no reason it couldn't be doing the same thing.
redfloatplane|1 month ago
For instance I use claude code to classify my expenses (given a bank statement CSV) for VAT reporting, and fill in the spreadsheet that my accountant sends me. Or for noting down line items for invoices and then generating those invoices at the end of the month. Or even booking a tennis court at a good time given which ones are available (some of the local ones are north/south facing which is a killer in the evening). All these tasks could be done at least as well outside the terminal, but the actual capability exists - and can only exist - on my computer alone.
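The expense-classification step could be sketched in a few lines; the keyword map and the CSV header here are illustrative assumptions, not my actual setup:

```python
import csv
import io

# Hypothetical keyword -> VAT category map; real rules would come from the accountant.
CATEGORIES = {
    "aws": "software",
    "uber": "travel",
    "pret": "meals",
}

def classify(description):
    """Return a category for one bank-statement line, defaulting to 'review'."""
    desc = description.lower()
    for keyword, category in CATEGORIES.items():
        if keyword in desc:
            return category
    return "review"  # flag anything unrecognized for a human

def classify_statement(csv_text):
    """Read a statement CSV (assumed 'date,description,amount' header) and tag each row."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        row["category"] = classify(row["description"])
    return rows
```

The point is less the keyword matching than the shape of the task: a deterministic pass over the CSV, with anything ambiguous left for a person (or the agent) to resolve.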
I hope this will interact well with CLAUDE.md and .claude/skills and so forth. I have those files and skills scattered all over my filesystem, so I only have to write the background information for things once. I especially like having claude create CLIs and skills to use those CLIs. Now I only need to know what can be done, rather than how to do it - the “how” is now “ask Claude”.
It would be nice to see Cowork support them! (Edit: I see that the article mentions you can use your existing 'connectors' - MCP servers I believe - and that it comes with some skills. I haven't got access yet so I can't say if it can also use my existing skills on my filesystem…)
(Follow-up edit: it seems that while you can mount your whole filesystem and so forth in order to use your local skills, it uses a sandboxed shell, so your local commands (for example, tennis-club-cli) aren't available. It seems like the same environment that runs Claude Code on the Web. This limits the use for the moment, in my opinion. Though it certainly makes it a lot safer...)
samiv|1 month ago
What's the play after you have automated yourselves out of a job?
Retrain as a skilled worker? Expect to be the lucky winner who is in cahoots with the CEO/CTO and magically gets to keep the job? Expect society to turn to social democracy and produce UBI? Make enough money to live off an investment portfolio?
jameslk|1 month ago
I just helped a non-technical friend install one of these coding agents, because it's the best way to use an AI model today that can do more than give him answers to questions. I'm not surprised to see this announced, and I would expect the same to happen with all the code agents becoming generalized like this.
The biggest challenge towards adoption is security and data loss. Prompt injection and social engineering are essentially the same thing, so I think prompt injection will have to be solved the same way. Data loss is easier to solve with a sandbox and backups. Regardless, I think for many the value of using general purpose agents will outweigh the security concerns for now, until those catch up
btown|1 month ago
Claude Code is very good at `doc = f(doc, incremental_input)` where doc is a code file. It's no different if doc is a _prompt file_ designed to encapsulate best practices.
Hand it a set of unstructured SOP documents, give it access to an MCP for your email, and have it gradually grow a set of skills that you can then bring together as a knowledge base auto-responder instruction-set.
Then, unlike many opaque "knowledge-base AI" products, you can inspect exactly how over-fitted those instructions are, and ask it to iterate.
What I haven't tried is whether Cowork will auto-compact as it goes through that data set, and/or take max-context-sized chunks and give them to a sub-agent who clears its memory between each chunk. Assuming it does, it could be immensely powerful for many use cases.
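A minimal sketch of that `doc = f(doc, incremental_input)` loop, treating the prompt file as an accumulating rule set; in practice the fold step would be an LLM call, stubbed here as deduplicated appending so the loop shape is visible:

```python
def refine(doc, incremental_input):
    """Fold one new observation into the prompt/SOP document.

    Stub for the real LLM step: append the observation as a rule,
    skipping it if the document already contains it."""
    rule = f"- {incremental_input}"
    if rule in doc:
        return doc  # already captured; keep the document stable
    return doc + rule + "\n"

def grow_skill(inputs, doc="# Auto-responder rules\n"):
    """doc = f(doc, incremental_input), applied over a stream of inputs."""
    for item in inputs:
        doc = refine(doc, item)
    return doc
```

Because the document is plain text under your control, you can diff it between iterations and see exactly how over-fitted the accumulated instructions are.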
Wowfunhappy|1 month ago
If the latter, I'm a bit skeptical, as I haven't had great success with Claude's visual recognition. It regularly tells me there's nothing wrong with completely broken screenshots.
falloutx|1 month ago
Is it that hard to check your calendar? It also feels insincere to hold a meeting of, say, 30 minutes to show a Claude-made deck that took you 4 seconds.
lossolo|1 month ago
1. https://www.youtube.com/watch?v=Q7NZK6h9Tvo
jdeng|1 month ago
One key architectural difference: Cowork runs sandboxed VMs on your local macOS machine, but we run sandboxes entirely in the cloud. This means:
- True isolation - agents never touch your local files or network, addressing the security concerns raised in this thread
- Actual autonomy - close your laptop, agent keeps working. Like delegating to a real coworker, not pairing with an assistant
- Scale - spin up 10 test agents without melting your CPU
The trade-off is latency and offline capability, but for testing workflows (our focus), asynchronous cloud execution is actually the desired model. You assign "test the checkout flow," go to lunch, come back to a full test report + artifacts.
Different use cases, different architectures. But the broader trend feels right - moving from conversational assistants to autonomous agents that operate independently.
alexdobrenko|1 month ago
Cowork is the nice version. The "here's a safe folder for Claude to play in" version. Which is great! Genuinely. More people should try this.
But!!! The terminal lets you do more. It always will. That's just how it works.
And when Cowork catches up, you'll want to go further. The gap doesn't close. It just moves.
All of this, though, is good? I think??
simonw|1 month ago
https://simonwillison.net/2026/Jan/12/claude-cowork/
krm01|1 month ago
Claude Cleaner, I mean Cowork, will be sweeping my desktop every Friday.
I'm sure it'll be useful for more stuff, but man…
fennecfoxy|1 month ago
Something like this is promising but, from what I can see, still lacking. So far I've been dealing with the regular issues (models aren't actually that smart; you have to work with their strengths and weaknesses) but also, even more, the data problem: simple embeddings just aren't enough, imo. And throwing all of the data at the model is just asking for context poisoning, hallucinations, and incorrect conclusions.
Been playing with instruction-tuned embeddings/sentiment, almost building a sort of "multimodal" system of embeddings to use with RAG/db calls. There's also what I call "data hiding": allowing the model to see the shape of the data but not the data itself, except when directly relevant.
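One way to read "data hiding" is letting the model see a record's shape but not its values; a toy sketch, where the field names and masking scheme are my own assumptions:

```python
def shape_of(value):
    """Describe a value's type and size without revealing its content."""
    if isinstance(value, str):
        return f"str(len={len(value)})"
    if isinstance(value, list):
        return f"list(len={len(value)})"
    return type(value).__name__  # e.g. 'int', 'float'

def hide_data(record, reveal=()):
    """Replace every field with its shape, except explicitly revealed ones."""
    return {
        key: value if key in reveal else shape_of(value)
        for key, value in record.items()
    }
```

The model can then reason over the masked record ("there's a 5-character name field, an int, a one-element tag list") and request specific fields, which an outer policy layer can grant or deny.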
arjie|1 month ago
I use Claude Code for everything. I have a short script in ~/bin/ called ,cc that starts Claude Code in an appropriate folder with permissions and contexts set up.
I'll usually pop into one of these (say, video) and say something stupid like: "Find the astra crawling video and stabilize it to focus on her and then convert into a GIF". That one knows it has to look in ~/Movies/Astra, and it'll do the natural thing of searching for a file named crawl or something and then go do the rest of the work. Likewise, the `modeler` context knows to create OpenSCAD files and so on, and the `wiki` context knows that I use Mediawiki for my blog, that I have a Template:HackerNews, and how to use it. I find these make doing things a lot easier and, consequently, more fun.
All of this data is trusted information: i.e. it's from me so I know I'm not trying to screw myself. My wife is less familiar with the command-line so she doesn't use Claude Code as much as me, and prefers to use ChatGPT the web-app for which we've built a couple of custom GPTs so we can do things together.
Claude is such a good model that I really want to give my wife access to it for the stuff she does (she models in Blender). The day that these models get really good at using applications on our behalf will be wonderful! Here's an example model we made the other day for the game Power Grid: https://wiki.roshangeorge.dev/w/Blog/2026-01-11/Modeling_Wit...
monarchwadia|1 month ago
It will be interesting for me, trying to figure out how to differentiate from Claude Cowork in a meaningful way, but there's a lot of room here for competition, and no one application is likely to be "the best" at this. Having said that, I am sure Claude will be the category leader for quite a while, with first-mover advantage.
I'm currently rolling out my alpha, and am looking for investment & partners.
mintflow|1 month ago
I have a folder controlled by Git containing various markdown files as my personal knowledge base and work-planning files (it's a long story; I gradually migrated EverNote->OneNote->Obsidian->plain markdown files + Git). Last time I tried wiring a local LLM API (using LMStudio) into claude code/open code and using the agent to analyze some documents, but the results weren't great: it either couldn't find the files or the answer quality was bad.
break_the_bank|1 month ago
Try it https://tabtabtab.ai
Would love some feedback!
slimebot80|1 month ago
How confident are we that this is a strict measure?
I personally have zero confidence in Claude rulesets and settings as a way to fence it in. I've seen Claude desperately decide for itself what to access once it has context bloat, and it tends to ignore rules.
Unless there is an OS-level restriction they are adhering to?
jpcompartir|1 month ago
It's a very powerful way to work on all kinds of things. V. interested to try co-work when it drops to Plus subscribers.
philip1209|1 month ago
Sharing here in case anybody from Anthropic sees and can help get this working again.
It may seem off-topic, but I think it hurts developer trust to launch new apps while old ones are busted.
_pdp_|1 month ago
In my opinion, these things are better run in the cloud to ensure you have a properly sandboxed, recoverable environment.
At this point, I am convinced that almost anyone heavily relying on a desktop chat application has far too many credentials scattered across the file system, ready to be grabbed and exploited.
spm1001|1 month ago
Is it possible this gets access to a faster API tier?
majormajor|1 month ago
1) Read meeting transcripts 2) Pull out key points 3) Find action items 4) Check Google Calendar 5) Build standup deck
feels like "how to put yourself out of a job 101."
It's interesting to see the marketing material be so straightforward about that.
Olshansky|1 month ago
Unsure what the future looks like unless Frontier Labs start financing everything that is open source.
redactsureAI|1 month ago
The basic ideas are minimal privilege per task in a minimal, contained environment, and heavy control over every action the AI performs. The AI can perform tasks without seeing any of your personal information in the process. A new kind of orchestration and privacy layer for zero-trust agentic actions.
Redactsure.com
From this feed I figured I'd plug my system; would love your feedback! I believe we are building out a real solution to these security and privacy concerns.
While the entire field is early I do believe systems like my own and others will make these products safe and reliable in the near future.
thiagowfx|1 month ago
On the other hand, it’s not “Claude Coder”, then it’s at least consistent.
650REDHAIR|1 month ago
It’s made one in the past for me with some errors, but a framework I could work with.
It created an “interactive artifact” that wouldn’t work in the browser or their apps. Gaslit me for 3 revisions of me asking why it wasn’t working.
It created a text file that it wanted me to save as a .csv to import into Excel, which failed hilariously.
When I asked it to convert the csv to an excel file it apologized and told me it was ready. No file to download.
I asked where the file was and it apologized again and told me it couldn’t actually do spreadsheets and at that point I was out of paid credits for 4 more hours.
WesleyLivesay|1 month ago
Bringing that type of functionality to a wider audience and out of the CLI could be really cool!
catoc|1 month ago
insanebrain|1 month ago
sharyphil|1 month ago
cm2012|1 month ago