OpenAI is raising the security bar for ChatGPT accounts, unveiling a new “Advanced Account Security” program anchored by a partnership with security key maker Yubico. The initiative is aimed at protecting both high‑risk and everyday users as AI tools increasingly become a home for sensitive conversations, code, and business data.

OpenAI launches Advanced Account Security

OpenAI’s new Advanced Account Security offering is an opt‑in protection layer that sits on top of existing login measures for ChatGPT. It has been designed with high‑value targets in mind, including executives, journalists, public figures, and security teams, but it is being made available to any user who wants stronger safeguards against account takeover.

The program introduces additional verification steps at sign‑in, with a strong emphasis on phishing‑resistant, hardware‑based authentication. Rather than relying solely on passwords or one‑time codes, users can secure access to ChatGPT behind a physical key that must be present to complete login, dramatically reducing the chances of a successful compromise.

This latest move builds on OpenAI’s broader security posture, which has included features like Lockdown Mode and elevated risk indicators for suspicious activity. Together, these capabilities are meant to protect accounts as more organizations embed ChatGPT into critical workflows.

Yubico partnership brings custom YubiKeys to ChatGPT

The centerpiece of the announcement is a strategic partnership with Yubico, the company behind the YubiKey and a pioneer of modern, phishing‑resistant authentication standards. As part of the collaboration, OpenAI and Yubico are introducing custom, OpenAI‑branded YubiKeys that integrate directly with the Advanced Account Security program.

These keys function as hardware‑backed passkeys: users register the devices to their ChatGPT account and are then required to use them when signing in from new devices or in high‑risk situations. By design, they are far harder for attackers to bypass than traditional two‑factor methods, such as SMS or app‑based codes, which can be intercepted or phished.

OpenAI’s chief information security officer, Dane Stuckey, underscored why the company chose this path, saying security keys are “one of the best ways to protect accounts from phishing.” He explained that OpenAI already uses YubiKeys internally to protect its own staff and that the new program is about extending that same level of defense to customers. In his words, the company wants to make it easier for ChatGPT users to opt into “phishing‑resistant protection” when the stakes are high.

How hardware keys change the threat landscape

Security keys such as the YubiKey are small USB‑C, USB‑A, NFC, or Lightning devices that store cryptographic secrets and perform secure authentication when a user logs in. Instead of typing in a password and then a one‑time code, the user connects the key and confirms a touch to prove they are physically present.

Crucially, these keys are bound to specific sites and services. That means even if a user is tricked into clicking a link to a fake login page, a properly configured key will refuse to authenticate to the fraudulent site. This property makes hardware keys one of the most effective defenses available against phishing, an attack technique that remains a primary vector for account compromise across the internet.
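The origin‑binding property described above can be illustrated with a minimal sketch. This is a conceptual model only: real FIDO2/WebAuthn keys use per‑site asymmetric key pairs and attestation, not a shared HMAC secret, and the class names here are hypothetical. The point it demonstrates is that because the browser, not the page, supplies the origin that gets signed, a response produced on a lookalike phishing domain never verifies at the real service.

```python
import hmac
import hashlib
import os

class HardwareKey:
    """Models a security key holding one on-device secret."""
    def __init__(self):
        self._secret = os.urandom(32)  # never leaves the device

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # The response covers the origin the browser reports, so a
        # signature minted for a phishing domain is useless elsewhere.
        msg = origin.encode() + b"|" + challenge
        return hmac.new(self._secret, msg, hashlib.sha256).digest()

class Service:
    """The legitimate site verifying a login attempt."""
    def __init__(self, origin: str, key: HardwareKey):
        self.origin = origin
        # Real WebAuthn stores only a public key server-side; sharing
        # the secret here just keeps the sketch short.
        self._key = key

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = self._key.sign(self.origin, challenge)
        return hmac.compare_digest(expected, response)

key = HardwareKey()
service = Service("https://chatgpt.example", key)
challenge = os.urandom(16)

# Normal login: the browser reports the genuine origin.
good = key.sign("https://chatgpt.example", challenge)
assert service.verify(challenge, good)

# Phishing: the browser reports the lookalike origin, so the
# resulting response fails verification at the real service.
bad = key.sign("https://chatgpt-login.evil", challenge)
assert not service.verify(challenge, bad)
```

In the real protocol the same guarantee comes from the browser scoping each credential to the registering origin, which is why no amount of user error on a convincing fake page yields a usable response.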

For ChatGPT, the stakes are especially high. Many users now paste proprietary source code, deal terms, financial models, internal memos, and other sensitive material into AI chats. If such an account is hijacked, an attacker could gain access to a treasure trove of confidential information, as well as any linked services or single sign‑on integrations. OpenAI’s new hardware‑backed approach is intended to make that scenario far less likely.

Rising AI adoption drives stronger defenses

The timing of the rollout reflects how quickly AI tools have moved from experimentation to everyday infrastructure. ChatGPT and other generative AI systems are now embedded in developer workflows, research, customer support, marketing, and even internal decision‑making processes. As a result, security experts have warned that these accounts are becoming increasingly attractive targets for attackers.

Recent disclosures around AI‑related security research have highlighted that protecting the underlying models is only part of the picture. Equally important is ensuring that front‑end accounts, where prompts, outputs, and integrations live, are tightly locked down. Hardware keys, backed by modern authentication standards, are seen as one of the few tools that can reliably stand up to both basic credential theft and sophisticated phishing campaigns.

Against that backdrop, OpenAI’s decision to formally partner with an established security vendor and launch a dedicated advanced security tier sends a clear signal: AI accounts are now critical assets that warrant the same level of protection as corporate email, source‑code repositories, and cloud administration consoles.

What changes for ChatGPT users

For users, enrolling in Advanced Account Security will involve a few additional steps, but the experience is designed to remain straightforward. After opting in, users will register one or more YubiKeys to their ChatGPT account. From then on, accessing the account from new browsers, devices, or high‑risk sessions will require the physical key as part of the login process.

Organizations are expected to move quickly to enforce hardware‑backed authentication for their most sensitive use cases: for example, for administrators who manage enterprise ChatGPT deployments, for developers working with confidential code, or for teams handling regulated data. Individual professionals, such as journalists and researchers, may also choose to adopt the keys to safeguard their own AI‑assisted work.

Because YubiKeys can be used across multiple services, the partnership also offers a broader convenience value: the same key used to protect a ChatGPT account can typically secure email, password managers, cloud dashboards, and other major platforms that support modern authentication standards. The trade‑off is the need for careful handling and backup planning, since losing a key without recovery options can lock a user out of all linked accounts.

A sign of where AI security is heading

With this announcement, OpenAI is positioning ChatGPT as one of the first mainstream AI platforms to integrate hardware‑backed, phishing‑resistant protection as a first‑class option. Rather than treating security keys as a niche feature for specialists, the company is bringing them to the forefront for any user who feels their AI‑powered work demands a higher level of assurance.

For security leaders, the message is that AI tools are now part of the same critical stack as email, storage, and source control, and they will need to be secured accordingly. For everyday users, the update is a sign that as AI systems grow more capable, the protections around them are finally starting to catch up.
