Google Unveils India-Focused Safety Charter, Shares How It Is Using AI to Fight Online Frauds and Scams

Google unveiled its Safety Charter for India, highlighting how it is using artificial intelligence (AI) technology to identify and prevent instances of cybercrime across its products. The Mountain View-based tech giant noted that as India's digital economy grows, so does the need for trust-based systems. The company is now using AI in its products and country-wide programmes, and to detect and remove vulnerabilities in enterprise software. Alongside, Google also highlighted the need to build AI responsibly.
Google's Safety Charter for India Highlights Key Milestones
In a blog post, the tech giant detailed its achievements in successfully identifying and preventing online fraud and scams across its consumer products as well as its enterprise software. Explaining the focus on cybersecurity, Google cited a report highlighting that UPI-related frauds cost Indian users more than Rs. 1,087 crore in 2024, and that total financial losses from unchecked cybercrime reportedly reached Rs. 20,000 crore in 2025.
Google also noted that bad actors are rapidly adopting AI to enhance their cybercrime tactics, using AI-generated content, deepfakes, and voice cloning to pull off convincing frauds and scams.
The company is combining its policies and suite of security technologies with India's DigiKavach programme to better protect the country's digital landscape. Google has also partnered with the Indian Cyber Crime Coordination Centre (I4C) to “strengthen its efforts towards user awareness on cybercrimes, over the next couple of months in a phased manner.”
Coming to the company's achievements in this space, the tech giant said it removed 247 million ads and suspended 2.9 million fraudulent accounts that were violating its policies, which also include compliance with state and country-specific regulations.
In Google Search, the company claimed to be using AI models to catch 20 times more scammy web pages before they appear on the results page. The platform is also said to have reduced instances of fraudulent websites impersonating customer service and government services by more than 80 percent and 70 percent, respectively.
Google Messages recently adopted the new AI-powered Scam Detection feature. The company claims the security tool is flagging more than 500 million suspicious messages every month. The feature also warns users when they open URLs sent by senders whose contact details are not saved; this warning is said to have been shown more than 2.5 billion times.
The company's app marketplace for Android, Google Play, is said to have blocked nearly six crore attempts to install high-risk apps. This included more than 220,000 unique apps that were being installed on more than 13 million devices. Its UPI app, Google Pay, also displayed 41 million warnings after its systems detected that the transactions being made were potential scams.
Google is also working towards securing its enterprise-focused products from potential cybersecurity threats. The company's Project Zero team, in collaboration with DeepMind, works to find previously unknown vulnerabilities in widely used enterprise software such as SQLite. In the case of the SQLite vulnerability, the company used an AI agent to detect the flaw.
The company is also collaborating with IIT Madras to research Post-Quantum Cryptography (PQC). The term refers to cryptographic algorithms designed to secure systems against the potential threats posed by quantum computers. These algorithms are used for encryption, digital signatures, and key exchanges.
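To illustrate the key-exchange use case mentioned above, the sketch below shows a post-quantum key encapsulation flow in Python. It is a minimal example, assuming the Open Quantum Safe project's liboqs-python bindings (the oqs package) and the ML-KEM-768 algorithm identifier; neither the library nor the algorithm choice is mentioned in Google's announcement, and the exact algorithm names available depend on the installed liboqs version.

```python
# Minimal post-quantum key-exchange sketch (illustrative, not from Google's charter):
# two parties derive a shared secret using a quantum-resistant key encapsulation mechanism.
# Assumes the Open Quantum Safe bindings: pip install liboqs-python
import oqs

ALGORITHM = "ML-KEM-768"  # NIST-standardised Kyber variant; name may differ by liboqs version

# The receiver creates a keypair and shares the public key.
with oqs.KeyEncapsulation(ALGORITHM) as receiver:
    public_key = receiver.generate_keypair()

    # The sender encapsulates a fresh shared secret against that public key.
    with oqs.KeyEncapsulation(ALGORITHM) as sender:
        ciphertext, shared_secret_sender = sender.encap_secret(public_key)

    # The receiver decapsulates the ciphertext to recover the same secret.
    shared_secret_receiver = receiver.decap_secret(ciphertext)

# Both sides now hold the same symmetric key, without a quantum-vulnerable exchange.
assert shared_secret_sender == shared_secret_receiver
```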
Finally, on the responsible AI front, Google claimed that its models and infrastructure are thoroughly tested against adversarial attacks through both internal systems and AI-assisted red teaming efforts.
For accuracy and the labelling of AI-generated content, the tech giant is using SynthID to embed an invisible watermark in text, audio, video, and images generated by its models. Google also requires YouTube content creators to disclose AI-generated content. Additionally, the double-check feature in Gemini lets users have the chatbot identify any inaccuracies by running a Google Search.
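For a sense of how such a statistical watermark is applied at generation time, the sketch below uses the open-source SynthID Text integration shipped in recent Hugging Face transformers releases (the SynthIDTextWatermarkingConfig class). The model name, key values, and parameters are placeholders chosen for illustration; this is not the pipeline Google describes for its own products.

```python
# Illustrative sketch: embedding a SynthID-style statistical watermark during text
# generation via the open-source integration in Hugging Face transformers (>= 4.46).
# The model, keys, and parameters below are demo placeholders, not Google's production setup.
from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig

tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The watermark is keyed: only a detector configured with the same keys can spot it.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57],  # arbitrary demo keys
    ngram_len=5,
)

prompts = tokenizer(["AI-generated text should be labelled because"],
                    return_tensors="pt", padding=True)
outputs = model.generate(
    **prompts,
    watermarking_config=watermarking_config,  # biases token sampling imperceptibly
    do_sample=True,
    max_new_tokens=40,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```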