This doesn't mention an outright ban, just that ChatGPT use has been restricted (whatever that means).
OpenAI has a good business model here, though possibly a slightly unethical one.
Shopify (who recently laid me off, but who I still speak highly of) locked down public access to ChatGPT's website. But you could use Shopify's internal tool (built using https://github.com/mckaywrigley/chatbot-ui) to access the APIs, with access to GPT-4. And it was great!
So look at this from OpenAI's perspective. They could put up a big banner saying "Hey everyone, we use everything you tell ChatGPT to train it to be smarter. Please don't tell it anything confidential!". And then also say "By the way, we have private API access that doesn't use anything you say as training input; maybe your company would prefer that?"
The louder they shout those two things, the more businesses will line up to pay them.
And the reason they can do this: they've built a brilliant product that everyone wants to use, and everyone is going to use.
Related: OpenAI announced (kinda hidden in a recent blog post) that they are working on a ChatGPT Business subscription, so businesses can get this without writing their own UI. I expect it to be popular.
https://openai.com/blog/new-ways-to-manage-your-data-in-chat...
> We are also working on a new ChatGPT Business subscription for professionals who need more control over their data as well as enterprises seeking to manage their end users. ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default.
It's interesting you mention Shopify and how they use ChatGPT. Yesterday the founder and CEO of Shopify, Tobias Lütke, sat down for an interview with Sam Altman: https://www.youtube.com/watch?v=uRIWgbvouEw
OpenAI ain’t desperate for cash. It is a pure strategy position. If they wanted another $10 billion, they’d have to hire a bouncer for the queue. It is like the idealized wet dream startup.
How can you tell if a third party is actually doing what they are claiming they are doing? You can't. You can only observe their overt and visible behavior, not their covert behavior.
Agree. Exactly how privacy is addressed across their product offering is incredibly ambiguous, and the details are spread out across the site.
Trying to explain to management what's OK, what isn't, and what the risks are in this space is quite a challenge without clear commitments and documentation.
The trouble is, I don’t trust their “private API access” not to mine the data surreptitiously, any more than I trusted Facebook not to exfiltrate my video and audio and my messages and…
That'd fit in nicely with the other two things they shout: "AI IS EXTREMELY DANGEROUS AND THREATENS TO KILL US ALL." "But sign up here, guy, your $22/mo. means this AI is now safe and contained :)"
ChatGPT is disallowed by default at pretty much every large company, just like every other external service that isn't explicitly approved and hasn't signed a contract. Apple employees aren't allowed to use their personal Dropbox account for storing corporate documents either, for example.
All such articles you see are just security teams clarifying the existing policy: you weren't allowed to use it before, and you are not allowed to use it now. It's only noteworthy because it has ChatGPT in the title.
I'd guess it's because of journalists who have no idea about industry standards?
I work at a large publicly traded company and our org was told to treat it as if it were Stack Overflow. It's okay to use, but assume everything you give it will be seen by people who do not work for us and everything it gives you is potentially dangerous or incorrect.
For a company of Apple's size, banning ChatGPT entirely is probably the only effective way of preventing people from training ChatGPT on their internal data.
PSA: My own employer (not Apple) has the same restriction, and is pushing employees to use an internal Azure GPT-3.5 deployment instead.
Unlike OpenAI, we do not disclose to employees that all prompts are logged and forwarded to both the security and analytics teams. Everything is being logged and replicated in plaintext with no oversight.
So be careful about code snippets with embedded creds or asking EmployerGPT really stupid questions about how to do your job. The priests are recording your confessions so you never know how or if they'll get used against you later.
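For context, a minimal sketch of what talking to such an internal Azure deployment looked like with the 2023-era openai-python client; the endpoint, key handling, and deployment name are placeholders, not anyone's real values:

```python
import os
import openai

# Client-side setup for an Azure OpenAI deployment (openai-python v0.27).
# The endpoint and deployment name below are placeholders.
openai.api_type = "azure"
openai.api_base = "https://your-company.openai.azure.com/"  # placeholder
openai.api_version = "2023-05-15"
openai.api_key = os.environ["AZURE_OPENAI_KEY"]  # never hard-code creds

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # the Azure deployment name (placeholder)
    messages=[{"role": "user", "content": "Refactor this function ..."}],
)
print(response["choices"][0]["message"]["content"])
```

Everything sent through a call like this transits infrastructure the employer controls, which is the parent's point: assume it is logged.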
I feel like users need to be better educated and held accountable.
Would you post proprietary data on Stack Overflow? No. You would formulate a generic question with any IP removed. That’s how we should use public ChatGPT.
So I think there’s an argument for a monitored portal for ChatGPT usage, where you are audited and can get in trouble. Heck, even an LLM itself can help identify proprietary data! Then use that to educate people and hold them accountable.
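A minimal sketch of the kind of pre-filter such a portal could run over outgoing prompts; the patterns and the hold-for-review policy are illustrative assumptions, not a complete DLP system:

```python
import re

# Illustrative patterns only; a real portal would use a proper secret
# scanner and a trained classifier, not three regexes.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # key material
    re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+"),   # inline creds
]

def review_reasons(prompt: str) -> list[str]:
    """Return the patterns a prompt matched, for the audit trail."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(prompt)]

prompt = "Why does this fail? conn = db.connect(password=hunter2)"
reasons = review_reasons(prompt)
if reasons:
    print("Held for review:", reasons)  # log to the audit system
else:
    print("Forwarded to the upstream API")
```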
I'm a bit surprised by the comments here. It looks like people really do find LLMs useful for their day-to-day work. I'm surprised because I (maybe naively) thought the level of hallucination in these tools was too prohibitive to get real value from them.
I personally don't mind that Apple bans ChatGPT. The interesting thing in this news to me is how many people seem to get real value from it, to the point where companies invest in getting private instances/versions.
How do you use these LLMs, and for what kinds of tasks? Do you feel AI-enhanced?
The innovation with generative AI is fundamentally legal: they're copyright-laundering systems. You put a few images (books, etc.) in, mix them around, and "poof", the copyright disappears.
I think this largely accounts for when it will be useful. (And why companies will and will not use it).
If you asked any of the CEOs of those companies banning ChatGPT, they'd tell you we're in for a glorious AI-shaped future, until it comes to their own company. It doesn't really inspire any confidence.
Can you imagine if Marlboro or Philip Morris forbade their employees from smoking for health reasons?
A lot of those companies have already set up their own instance of OpenAI's models running on Azure. I'm not involved with this directly, but it is my understanding that Microsoft are selling (and pricing) this aggressively.
Perhaps it is more like Marlboro forcing its employees to smoke Marlboros - if those Marlboros were secret, non-public cigarettes only available on Marlboro premises. OK so maybe this metaphor is a little thin.
Or if Steve Jobs disallowed his children from using iPads?
My company is huge (100k people) and luckily they also see it as critical, and will make it centrally available to us soon.
But this will be a problem for big companies; small ones normally care less about these types of things. This means big companies have to do something, otherwise they will be competing against ML-enhanced developers.
I'm sure they're terrified of competing with the legions of boilerplate generators.
Use ChatGPT in a domain you're a relative expert in and you run into a million scenarios where it offers a "solution" that does something close to what was described, but not quite, and even as a domain expert you might not immediately notice the problem. Even worse, it may produce side effects suggesting it is working as desired when it's not.
In the not-so-secret world of Stack Exchange copypasta, other skilled humans would point these issues out. In the world of LLMs, you risk introducing ever more code that looks perfectly correct, but isn't. What happens at scale?
The net change in efficiency from LLMs will be quite interesting to see. Unlike past technologies, where there was only user error, here we're dealing with a calculator that will, not infrequently, give you an answer that's wrong but looks right. What sort of 'equilibrium' people will settle into with this is still an open question.
> This means big companies have to do something, otherwise they will be competing against ML-enhanced developers.
It's not that bad. In reality, if we're looking at large tech companies, they've got senior people who know pretty much anything you want, available within minutes or hours, which is something small companies just can't afford.
ML-enhanced devs may be a little bit faster and get some usually-correct help, but they won't get any wisdom.
My company just got its own instance, which prevents our data from being exposed outside of that container, so it won’t be used for (public) training. Surely companies like Apple can do the same.
I wouldn't worry about that. Our software contains so much natural stupidity that artificial intelligence, even if it existed, wouldn't have a chance in hell.
Why is it critical to use a boilerplate code generator?
A lot of companies, big and small, have been following this strategy. It makes a lot of sense to me. Companies should never blindly jump on the bandwagon of every novelty.
I'm probably the slow guy, but how does this work? If I obtain any data, I'm allowed to do anything I want with it? The law doesn't seem to work like that? Q: Where did you find the data? A: Someone uploaded it! People upload things to [say] The Pirate Bay or [say] YouTube all the time.
Could we then look at this type of automation as a kind of cryptographic data store where no one knows what is inside which instance?
The whole process of teaching it to keep things secret from the humans seems like a terrific idea. It only prevents people from checking whether it knows something. It will just happily continue using it as long as plausible deniability is satisfied.
If one can't provide a copy of the data about an EU citizen (and everything derived from it), shouldn't the EU citizen be entitled to receive the whole thing? And to request that it be deleted?
Say I steal everyone's chat log, does rolling a giant ball of data from it absolve my sins? If I allow others to pay not to make their upload public does that grant me absolution? The events don't seem remotely related.
This is going to be the new cryptocurrency bubble. People are going to give speeches about the revolutionary new system while the room fills with sinister motives for personal gain until the toxicity is dense enough to crush any positive effort.
How long till there's a self-hosted Copilot based on LLaMA or one of the plethora of other open-source models? That will be a sizable market, I predict.
I'd jump into it if I had time/resources.
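For concreteness, self-hosting is already prototypable with the llama-cpp-python bindings; a minimal sketch, where the model path is a placeholder for whatever local weights you have (whether the output quality justifies it is a separate question, as the next comment argues):

```python
# pip install llama-cpp-python; runs entirely on your own hardware.
from llama_cpp import Llama

# Placeholder path to locally stored quantized weights.
llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")

out = llm(
    "### Instruction: Write a Python function that reverses a string.\n"
    "### Response:\n",
    max_tokens=128,
    stop=["### Instruction:"],
)
print(out["choices"][0]["text"])
```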
I've been following this stuff pretty closely, and so far absolutely nothing out there in the open-model world is even remotely close to GPT-4 for coding. GPT-3.5 maybe, but I honestly find GPT-3.5 almost useless.
Apple has the financial and technical depth to create their own private GPT.
The business opportunity here is for anyone who can figure out how to do privacy-preserving LLMs (without the need to trust the service provider) effectively, in terms of both performance and cost.
I would keep my eye on this space:
https://medium.com/optalysys/fhe-and-machine-learning-a-stud...
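For a flavor of what "privacy-preserving" could mean here, a toy sketch of fully homomorphic encryption with the TenSEAL library: the server applies a linear layer to a vector it can never read. Scaling this up to a full LLM is exactly the performance-and-cost problem mentioned above; the weights and parameters are illustrative only.

```python
import tenseal as ts  # pip install tenseal

# Client side: create an encryption context and encrypt the input.
ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
ctx.global_scale = 2**40
ctx.generate_galois_keys()

enc_x = ts.ckks_vector(ctx, [0.5, -1.2, 3.0])  # e.g. prompt features

# Server side: compute on the ciphertext without ever decrypting it.
weights = [0.25, 0.5, -0.75]
enc_y = enc_x.dot(weights)

# Client side again: only the key holder can read the result.
print(enc_y.decrypt())
```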
All of the "ChatGPT banned" discussions seem to fail to differentiate ChatGPT (the D2C interface) from the B2B products. From a security and compliance perspective, they're essentially two completely different risk profiles.
* ChatGPT is the equivalent of putting your info on PasteBin or some other public sharing site. Anything you send there, you should not expect to remain private.
* The B2B side has much better controls and limits to minimize risk.
Why does OpenAI not address this concern directly? I’m sure it’s better for both Apple and OpenAI if there was a business agreement to not use their data.
OpenAI does address this: paying for API access [0] means that OpenAI does not use your data for training any models [1], while using ChatGPT directly at chat.openai.com does share your data for training by default. You can turn this off in settings [2], but companies would probably not be comfortable relying on that option.
> Do you store the data that is passed into the API?
> As of March 1st, 2023, we retain your API data for 30 days but no longer use your data sent via the API to improve our models. Learn more in our data usage policy.
Note: While it says "Learn more in our data usage policy", I found no mention of data retention at all in the usage policy at https://openai.com/policies/usage-policies
[0] https://openai.com/pricing
[1] https://platform.openai.com/docs/guides/chat/faq
[2] https://help.openai.com/en/articles/7792795-how-do-i-turn-of...
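For completeness, a minimal sketch of the API path with the 2023-era openai-python client (the key is a placeholder read from the environment); requests made this way fall under the quoted API data-usage policy rather than the ChatGPT consumer terms:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # placeholder; never hard-code

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize our retention options."}],
)
print(resp["choices"][0]["message"]["content"])
```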
They are working on ChatGPT Business.
"for professionals who need more control over their data as well as enterprises seeking to manage their end users.”