Last month, the Tea app suffered a data breach in which images and other private data belonging to women who had signed up for the service were stolen. It's an app that was set up with the best of intentions...but it illustrates just how important it is to think through the security of any data a service retains, and whether certain data should be retained at all.
An App for Women
Tea was founded in November 2022 by Sean Cook, a software engineer whose mother had some harrowing experiences with popular online dating sites. The app's premise is to let women share dating experiences they've had with men, highlighting "red flags" and "green flags" so that other women walk into dates with those men forewarned, or perhaps forgo dating men with a history of mistreating women altogether. Tea briefly became the most downloaded app on Apple's App Store, passing even ChatGPT. (As of August 18, 2025, it's still #3 on Apple's "Top Free Apps" list, behind only ChatGPT and Meta's Threads.)
The asymmetric nature of dating risk is well established; as Margaret Atwood famously put it, if a date doesn't go well, men are afraid they'll get laughed at, while women are afraid they'll get killed. Since men, by and large, aren't interested in protecting women, women must protect themselves. We all know the trick of Googling your date's name to see what comes up about him, right? Tea is just a logical extension of that.
I suppose, if you lean toward the "4B" persuasion, this might ultimately convince you that there are no men worth dating at all, and reinforce an attitude of female separatism. But while more women are going that route all the time, others still want to date, and to be safe while doing it.
Data Left Behind
Unfortunately, the app's makers had it storing certain items of personal data in a publicly accessible Firebase storage bucket. The data included selfies and pictures of women holding up their photo IDs: images required for account verification that, according to Tea, were never supposed to be retained at all. A second leak surfaced later, this one containing private messages between members, many of them on sensitive topics.
The Firebase bucket was misconfigured and accessible without any authentication. A user on the 4chan imageboard discovered the exposure and began sharing a link to download the data. Tea responded by taking systems offline and bringing in cybersecurity experts to fully vet its infrastructure; it also offered free identity protection services to affected users.
A Bright Red Target
Now, the app's developers should have known that, as soon as the existence of an app like this became known, there would be men mounting technical attacks against it. This is particularly true of the male denizens of a site like 4chan, some of whose talents have been described as "weaponized autism." (The term is often associated with "Red Pill" communities, of which 4chan is one.) Any data stored by such an application needs to be:
- The absolute minimum data required to perform its functions.
- Locked down by security measures reminiscent of the ones protecting Pine Gap. (I saw the Netflix series Pine Gap recently, so that's the example that came to mind.)
In particular, using public cloud storage is probably a bad idea. I would design such a service starting from a private server (either a colocated box or a private instance on something like DigitalOcean), with data encrypted both in transit and at rest, using quality algorithms and sound key management. Data like the initial ID photos, which Tea claimed weren't being archived, should never have been written to permanent storage in the first place.
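To make the "at rest" part concrete, here's a minimal sketch (not Tea's actual code) of handling a verification photo so that nothing unencrypted ever touches disk, and the stored copy is deleted once verification is done. It assumes Python's `cryptography` package; real key management (a KMS, rotation, audit logging) is out of scope here.

```python
from pathlib import Path
from cryptography.fernet import Fernet

# In production the key would come from a KMS or secrets manager,
# never generated ad hoc or hard-coded like this.
cipher = Fernet(Fernet.generate_key())

def store_for_verification(photo: bytes, path: Path) -> None:
    # Encrypt before anything touches disk, so a leaked file is only ciphertext.
    path.write_bytes(cipher.encrypt(photo))

def verify_and_purge(path: Path) -> bytes:
    # Decrypt only for the verification step, then delete the stored copy:
    # "not archived" has to mean actually deleted.
    photo = cipher.decrypt(path.read_bytes())
    path.unlink()
    return photo
```

The point isn't this particular library; it's that the retention policy gets enforced in code, rather than just promised in a privacy policy.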
Firebase seems to have a history of problems like this; a misconfigured Firebase setup was behind a remote code execution vulnerability in the Arc browser, for example. One commenter noted: "Using Firebase in 2013 to build an RSVP system for my wedding website was a good idea. Using it in 2025 for literally ANY reason, up to and including storing pictures of users IDs, is a bad bad bad idea." Whoever wrote the app probably opened up the permissions on the Firebase bucket because they needed to get an MVP out, and didn't want to spend time learning how the permissions actually work.
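Firebase Storage buckets are ordinary Google Cloud Storage buckets underneath, so one way to catch this class of mistake is to audit the bucket's IAM policy for public grants. Here's a minimal sketch, assuming the `google-cloud-storage` client and suitable credentials (the bucket name is a placeholder):

```python
from google.cloud import storage

def public_roles(bucket_name: str) -> list[str]:
    """Return any IAM roles this bucket grants to the public at large."""
    client = storage.Client()
    policy = client.bucket(bucket_name).get_iam_policy(requested_policy_version=3)
    return [
        binding["role"]
        for binding in policy.bindings
        if {"allUsers", "allAuthenticatedUsers"} & set(binding["members"])
    ]

# "my-app-uploads" is a hypothetical bucket name.
for role in public_roles("my-app-uploads"):
    print("Publicly granted role:", role)
```

(Firebase's own Security Rules are a separate layer on top of IAM, but the rule of thumb is the same for both: default to deny, and grant access only to the authenticated owner of each object.)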
Vibe Coding?
Many people have speculated that Tea's problems stemmed from the app being "vibe coded." At least one expert, Jan Kammerath, has disassembled the Android version of the app and found some oddities. It was apparently written in Dart using the Flutter framework, which the need for an iOS version would explain. The app's resources contained critical configuration values, including API keys and the name of the cloud storage bucket, easily visible to anyone who knows where to look. Kammerath concludes that the app was developed by an inexperienced developer, possibly with a small team reporting to them, and that "The app was likely not vibe coded as none of the models of the past months would’ve made such obvious mistakes."
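"Easily visible" is no exaggeration. An Android APK is just a ZIP archive, so anyone can scan its contents for key-shaped strings with no special tooling at all. A toy sketch (the APK path is a placeholder; the regex matches the well-known "AIza..." shape of Google API keys):

```python
import re
import zipfile

# Google API keys are 39 characters starting with "AIza".
KEY_PATTERN = re.compile(rb"AIza[0-9A-Za-z_\-]{35}")

with zipfile.ZipFile("app-release.apk") as apk:  # hypothetical APK path
    for name in apk.namelist():
        for match in KEY_PATTERN.finditer(apk.read(name)):
            print(f"{name}: {match.group().decode()}")
```

The lesson: anything shipped inside a client app should be treated as public. Secrets belong on the server, behind an API that enforces authentication.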
That's not to say that vibe coding is completely safe, either. Developers need to at least read the code their AI assistants are submitting in their name, and have some idea of what it's doing. After all, it takes a smart dog to hunt birds, but it takes a hunter behind him to keep him from chasing after rabbits, and the hunter needs to know more than the dog.
Tea or No Tea?
Some have questioned whether the Tea app should even have been written in its current form, especially by its current set of founders, who are all men. One commenter compared it to Flo, a period-tracking app that shared women's health data with third parties without consent.
Men, of course, would probably argue against it. As this article details, they even created their own app, "Teaborn," for men to vet women. It lasted all of 24 hours before being taken down, after men started posting nude photos of women. Sad, and yet utterly predictable.
I'd argue that the app is useful, and in some cases, vitally necessary. But security should have been on everyone's minds when creating it.