
Google's AI isn't actually intelligent: its error shows it facilitating bank-targeted fraud

Publication: chiefofficersnet
Editorial Staff

Like so many developers of so-called artificial intelligence, Google is proud of its efforts and yet, once again, AI has proved that it is often a failure waiting to happen. But this example is even funnier - and more worrying. Someone has manipulated it, and the target, amazingly, is people working in banks and similar institutions. Should banks now ban Google Alerts?

This is a screenshot of an extract from today's Google Alert for money laundering. If you received this e-mail and are silly enough to allow the display of mail in HTML format, you will not have seen, before clicking on the links, that they divert from Google's analytics to websites that obviously have nothing to do with money laundering. One links to a page at "gopublicschoolsoakland.com" that, if you strip off the page reference, first does nothing and then diverts to the online shopping service Lazada. The second is at "pasc-calgary.org", which then automatically diverts to a site that Firefox blocks as a potential threat. The last goes to "elbaitshop.com" and then diverts to something else.
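
For readers who want to check where such a link really leads before clicking it, the sketch below is one way to do so. It is a rough illustration only: it is written in Python using the requests library, the alert URL and hostnames are invented, and it unwraps the google.com/url redirect wrapper that alert e-mails use before tracing the HTTP redirect chain. It will not catch JavaScript or meta-refresh diversions of the kind described above, but it does reveal how far a single alert link can stray from the page Google indexed.

# Sketch: see where a Google Alert link actually lands before clicking it.
# The alert URL below is invented for illustration; real alert links wrap the
# indexed page in a google.com/url redirect whose "url" query parameter holds
# the site Googlebot actually crawled.
from urllib.parse import parse_qs, urljoin, urlparse

import requests

def unwrap_google_alert_link(alert_link: str) -> str:
    """Pull the indexed target out of a google.com/url redirect wrapper."""
    query = parse_qs(urlparse(alert_link).query)
    # If it is not a Google redirect, just return the link unchanged.
    return query.get("url", [alert_link])[0]

def follow_redirect_chain(url: str, max_hops: int = 5) -> list[str]:
    """List each HTTP redirect hop without rendering any page content."""
    chain = [url]
    for _ in range(max_hops):
        resp = requests.head(url, allow_redirects=False, timeout=10)
        location = resp.headers.get("Location")
        if not location:
            break
        url = urljoin(url, location)  # handle relative Location headers
        chain.append(url)
    return chain

if __name__ == "__main__":
    # Hypothetical alert link, for illustration only.
    alert_link = (
        "https://www.google.com/url?rct=j&sa=t"
        "&url=http://hacked-example-site.org/money-laundering-news"
    )
    for hop in follow_redirect_chain(unwrap_google_alert_link(alert_link)):
        print(hop)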

What appears to have happened is that someone has hacked all of the primary sites and inserted text that Googlebot finds and reports as relating to the subject matter for which alerts are set. In the case of money laundering, that audience is going to be people in compliance and risk management in financial institutions.

For now, it appears that the trick has been to set up links to advertising, presumably in an attempt to earn affiliate commissions: after all, getting Google to send out your affiliate links to its hundreds of millions of users, instead of sending out your own mail, is a pretty nifty trick.

But Google already fails, on an epic scale, to identify those websites which promote or divert to illegal content. The problem is well known. However, this is the first time we have seen the manipulation of Googlebot on this scale.

Google wants you to trust it to allow cars to move around without the risk-management skills of a human driver. But on this evidence, it can't even drive traffic away from fake and potentially criminal websites.

Genius? Hardly.

It's important to realise that it's not Google's servers that have been hacked: the issue is with Google's failure to identify suspicious data. Worse, instead of identifying it and blocking it, Google actually disseminates fraudulent data.

This could be fake news, a location that hides malware, a phishing site, or sites promoting illegal or unsavoury content. What lies at the end of each link is not the point. What is important is that Google's much-trumpeted AI does not work as required.

We see banks pilloried for failures in their suspicious activity monitoring systems, both technological and human. Yet one of the world's largest data processors, and one of its richest companies, simply amplifies risk instead of taking steps to contain it - where, ironically, the steps it should take are not hugely dissimilar to those that banks and other regulated firms are required to take in relation to suspicious transactions.

It's not acceptable.

Worse, because of the subject matter, those most likely to click are in the financial sector, including banks, and the risk of drive-by downloads as this tactic spreads is immense.

Arguably, financial institutions should now consider banning Google Alerts unless and until Google can demonstrate that it has effective risk management controls in place.
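
In the interim, one control a mail gateway or compliance team could apply is sketched below. It is an assumption-laden illustration, not a recommendation of any particular product: the hostnames are invented, quarantine handling is left to the gateway, and it only flags alert links whose HTTP redirect chain ends on a different host from the page Googlebot indexed.

# Sketch of an interim control for incoming Google Alert e-mails: flag any
# alert link whose final landing host differs from the site Googlebot indexed.
# Hostnames are invented; JavaScript or meta-refresh diversions would need a
# sandboxed browser rather than this HTTP-level check.
from urllib.parse import parse_qs, urlparse

import requests

def alert_link_diverts(alert_link: str) -> bool:
    """True if following the link ends on a different host than the one indexed."""
    query = parse_qs(urlparse(alert_link).query)
    indexed_url = query.get("url", [alert_link])[0]
    indexed_host = urlparse(indexed_url).netloc
    final = requests.head(indexed_url, allow_redirects=True, timeout=10)
    return urlparse(final.url).netloc != indexed_host

if __name__ == "__main__":
    # Hypothetical alert link, for illustration only.
    suspect = (
        "https://www.google.com/url?rct=j&sa=t"
        "&url=http://hacked-example-site.org/money-laundering-news"
    )
    print("quarantine" if alert_link_diverts(suspect) else "deliver")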
