'Please halt this activity': Not-so-open OpenAI seems to have gone full mob boss, sending threatening emails to anyone who asks its latest AI models probing questions

In a seeming rendition of the classic pre-execution "you ask too much" trope, OpenAI has revealed itself to be—shocker—not so open after all. The AI chatbot company appears to have started sending threatening emails to users who ask its latest models, codenamed "Strawberry," questions that are a little too probing.

Some users have reported (via Ars Technica) that using certain phrases or questions when speaking to o1-preview or o1-mini results in an email warning that states: "Please halt this activity and ensure you are using ChatGPT in accordance with our Terms of Use and our Usage Policies. Additional violations of this policy may result in loss of access to GPT-4o with Reasoning."
