Feb 18
Massive Security Breach at Codeway Exposes 300 Million Private AI Chat Logs
A massive security failure has put the private conversations of millions at risk after an unprotected database belonging to the "Chat & Ask AI" app was left accessible online, exposing roughly 300 million messages from more than 25 million users.
The app, owned by the Istanbul-based technology firm Codeway, acts as a "wrapper": a single gateway through which users interact with prominent AI models such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. Because the platform funnels conversations with multiple systems through one backend, a single technical slip-up was enough to affect its entire global user base; the app has been downloaded more than 50 million times across major app stores.
The breach was not the result of a sophisticated hack but of a common Firebase misconfiguration. Firebase, a Google service used to manage app data, had its "Security Rules" mistakenly set to public, effectively allowing anyone to read or delete sensitive data without a password. An independent researcher known as "Harry" discovered the leak and reported that the exposed data included full chat histories and the custom names users assigned to their AI bots. Disturbingly, the logs contained deeply personal material, ranging from discussions of illegal activities to requests for suicide assistance. Many users treat these bots as private journals, which makes the exposure of such "disturbing requests" a major ethical and privacy concern.
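For readers unfamiliar with the mechanism: access to a Firebase Realtime Database is governed by a JSON rules file, and the "public" state described above amounts to two flags. The report does not say which Firebase product Codeway used, so the snippet below is a minimal sketch assuming the Realtime Database; the "chats/$uid" layout in the corrected version is a hypothetical schema for illustration, not Codeway's actual one.

// Insecure: every record is readable and deletable by anyone on the internet.
{
  "rules": {
    ".read": true,
    ".write": true
  }
}

// Locked down (hypothetical layout): each signed-in user can reach only
// the chat history stored under their own user ID.
{
  "rules": {
    "chats": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}

Anything a rules file does not explicitly grant is denied by default, so the second form exposes nothing beyond each user's own records.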
This incident follows a similar breach at OmniGPT, highlighting a recurring pattern in which traditional application-security failures intersect with highly personal AI data. After the discovery, Harry built a scanning tool that showed how widespread the flaw is (the basic check is sketched below): 103 of the 200 iOS apps he tested suffered from the same Firebase weakness. Although Codeway reportedly fixed the error within hours of being alerted on 20 January 2026, it remains unclear how long the data was exposed or whether other parties copied it before the leak was plugged.

To reduce future risk, experts suggest avoiding real names, logging out of social media while using chatbots, and treating every AI conversation as if it could one day become public. James Wickett, CEO of DryRun Security, noted that while these backend misconfigurations are familiar, they become "far more dangerous" because of the extreme sensitivity of the personal data processed by modern AI products.
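Harry's scanner itself has not been published with this report, but the basic check it performs is simple enough to sketch. The Python below is an illustration under one assumption, namely that the target is a Firebase Realtime Database: a database with public rules answers an unauthenticated REST read with data, while a locked-down one returns "Permission denied". The function name and URL are hypothetical.

import requests

def is_publicly_readable(db_url: str) -> bool:
    """Probe a Firebase Realtime Database root for unauthenticated read access.

    db_url is the database root, e.g. "https://example-app.firebaseio.com"
    (a hypothetical name). The shallow=true parameter asks the REST API for
    top-level key names only, keeping the probe small. Public rules answer
    with HTTP 200; locked-down rules answer 401 "Permission denied".
    """
    resp = requests.get(f"{db_url}/.json", params={"shallow": "true"}, timeout=10)
    return resp.status_code == 200

# Only point this at databases you own or are authorized to assess.
if __name__ == "__main__":
    print(is_publicly_readable("https://example-app.firebaseio.com"))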