Google Cloud Document AI, a service designed to streamline document processing through machine learning, has recently come under scrutiny due to a significant security flaw. According to cybersecurity researchers at Vectra AI, a vulnerability in Document AI allowed unauthorized access to sensitive data stored in Google Cloud accounts and posed a risk of malware insertion.
The issue, identified and reported to Google by Vectra AI in early April, was related to the service’s batch processing feature. Document AI automates the extraction and analysis of documents, such as invoices and contracts, transforming unstructured data into structured information. During batch processing, the service uses a “service agent” with broad permissions rather than the caller’s specific permissions. This oversight created a security gap that could be exploited by malicious actors.
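To illustrate the mechanism at issue, the sketch below shows a typical batch-processing call using the public Document AI Python client. The project, processor, and bucket names are hypothetical, and the code is a simplified illustration rather than the researchers' proof of concept; it shows how the Cloud Storage reads and writes in a batch job are performed by the Document AI service agent on the caller's behalf, rather than under the caller's own permissions.

```python
# Hypothetical sketch of a Document AI batch-processing call.
# Project, processor, and bucket names are invented for illustration.
from google.cloud import documentai_v1 as documentai

PROCESSOR_NAME = "projects/example-project/locations/us/processors/example-processor-id"

client = documentai.DocumentProcessorServiceClient()

# Input documents are read from Cloud Storage, and results are written back
# to Cloud Storage, by the Document AI service agent rather than by the user
# who submitted the request.
input_config = documentai.BatchDocumentsInputConfig(
    gcs_prefix=documentai.GcsPrefix(gcs_uri_prefix="gs://example-input-bucket/invoices/")
)
output_config = documentai.DocumentOutputConfig(
    gcs_output_config=documentai.DocumentOutputConfig.GcsOutputConfig(
        gcs_uri="gs://example-output-bucket/results/"
    )
)

request = documentai.BatchProcessRequest(
    name=PROCESSOR_NAME,
    input_documents=input_config,
    document_output_config=output_config,
)

# The long-running operation runs with the service agent's broad, project-wide
# storage permissions. This is the behavior the researchers flagged: a caller
# only needs permission to invoke batch processing, not permission on the
# buckets the job reads from or writes to.
operation = client.batch_process_documents(request=request)
operation.result(timeout=600)
```

In a setup like this, the caller names arbitrary input and output bucket paths in the same project, and the service agent's permissions, not the caller's, determine what the job can touch.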
The flaw enabled attackers to access any Google Cloud Storage bucket within the same project, potentially exposing all data stored there. Researchers demonstrated a proof of concept to Google, illustrating how the vulnerability could be used to exfiltrate a PDF file, modify it, and write it back to storage.
Google initially addressed the issue with a patch but faced criticism for not fully resolving the problem. Following further pressure from researchers, Google confirmed in early September that a more effective fix had been applied. The update downgraded the service agent's permissions so that an attacker would already need access to a victim's project to exploit the vulnerability.
The incident highlights the ongoing challenges in securing cloud-based services and emphasizes the need for robust security measures in AI-driven tools. While Google has taken steps to mitigate the flaw, the situation underscores the importance of vigilance and prompt action in addressing cybersecurity threats.