ChatGPT Introduces New Safety Features to Fight Prompt Injection

OpenAI has introduced Lockdown Mode and elevated risk labels to protect ChatGPT from prompt injection, marking a new phase in enterprise-grade AI security.

As AI platforms grow, they face new risks, including attempts by third parties to interfere with their responses. OpenAI is moving quickly to address one of these threats.

Prompt injection is one such risk. It occurs when a third party hides misleading instructions in content an AI system processes, tricking the model into following them. If these attempts succeed, the AI could produce malicious responses or expose sensitive information.

AI companies aren’t letting this obstacle slow them down, however. OpenAI recently announced two features designed to combat prompt injection: Lockdown Mode and “Elevated Risk” labels.

Lockdown Mode

Lockdown Mode is an optional, advanced security setting that restricts how ChatGPT interacts with external systems. It disables some of the platform’s tools and capabilities to reduce the risk of prompt injection threats.

Web browsing is one example: in Lockdown Mode, browsing is limited to cached content, so no live network requests leave OpenAI’s infrastructure — added protection for the user’s data. Other features are disabled entirely in this mode.

Lockdown Mode is geared toward highly security-conscious users, such as executives and security teams at high-profile organizations. Not everyone will benefit from it, because many users rely on the very tools this mode restricts. ChatGPT business plans already include enterprise-grade data security, but this is an extra layer of protection for those who need it.

Workspace admins can customize the Lockdown Mode experience by choosing which apps, and which actions within each app, can be used. Outside of Lockdown Mode, the Compliance API provides logs that help admins keep an eye on app usage, shared data, and connected sources.

“Elevated Risk” Labels

To help users make informed choices about AI risks, OpenAI is also introducing “elevated risk” labels. While the company is investing heavily in keeping ChatGPT safe and secure, some emerging risks are not yet fully addressed by the industry’s safety and security mitigations. Features exposed to these risks will carry “elevated risk” labels to provide transparency for users.

One example is network access for Codex, OpenAI’s coding assistant. Enabling this feature lets Codex take actions on the web, such as locating certain documents. While the feature is beneficial for many users, an “elevated risk” label pops up when you turn it on, with a clear explanation of the risks involved.

These labels don’t stop risks from occurring, but they give users the information they need to weigh potential concerns and decide for themselves which tools to use. Some may find it easy to avoid the high-risk features; others may decide the risks are worth what the tools offer.

AI Queries Remain Popular

Even though additional risks arise as AI grows, that hasn’t deterred people from using AI searches to find answers. Companies like OpenAI are working hard to combat new risks, ensuring AI platforms are as safe as possible for users.

If you’re not focusing on AI optimization when marketing your brand, you could be missing out on customers who use AI searches. Contact Avenue Z today to experience how we can enhance brand visibility.
