Why enterprises should not over-rely on AI for cybersecurity (Includes interview)


By Tim Sandle     Jul 8, 2020 in Business
AI is a strong tool for cybersecurity, but it is not a silver bullet. Enterprises that rely on it too heavily for their security could be setting themselves up for problems. This is particularly so in the finance sector.
While artificial intelligence (AI) in the financial context is effective at monitoring for atypical behaviors, and hence at helping to find fraudsters, this behavioral assessment can also produce false positives: the AI erroneously flags innocent financial activity as fraudulent. As a result, some legitimate transactions are blocked from being processed, causing frustration for clients. There is an impact on the financial institution as well, because every alert must be reviewed manually. Where false positives are common, this review takes considerable time, and there is a risk of clients leaving if the problem is seen to recur. This undercuts one of the main rationales for introducing AI, which is to save time. Some would argue that, instead of saving time, the technology can actually increase the amount of time devoted to security assessments.
To gain insights into the problem of AI false positives, Digital Journal spoke with Mike Cutlip, President and CEO of iti.
Digital Journal: How big is the cybersecurity threat?
Mike Cutlip: The threat is very large and growing. Cybercrime is impacting companies in every industry and consumers around the world. Our financial services clients are battling this risk all the time, especially in our space where identities are being intercepted and misused every day. According to 2019 research from Javelin, fraud impacts 14.4 million people in the United States. In its 2019 Internet Crime Report, the FBI stated that it received a total of 467,361 complaints of new cybercrimes, with reported losses exceeding $3.5 billion.
DJ: What form do most cybersecurity threats take?
Cutlip: Some of the mainstream threats are phishing and hacking leading to identity theft, business email compromise (BEC), ransomware, and more. With financial transactions, fraud is often perpetrated by cybercriminals attacking the payment process using BEC and other social engineering scams. With BEC, criminals submit phony payment instructions as well as amend and redirect legitimate payments. It’s getting worse every year. The FBI tracked 2019 U.S. losses from BEC alone at over $1.75B, up 35% from the year before, and that’s just what’s reported.
DJ: Is the threat bigger or smaller for mobile devices?
Cutlip: It’s definitely growing as well because mobile devices are now used as the launching point for more and more transactions. We have a recent study from Check Point, which found that cyberattacks targeting smartphones and other mobile devices rose by 50% over last year. However, mobile devices still benefit from being unique decentralized instances compared to the vast data stores that hackers target in centralized servers.
DJ: How can artificial intelligence help enterprises to protect themselves?
Cutlip: Artificial Intelligence is currently positioned at the forefront of fraud detection. That ability to monitor multiple data points, detect an event, and formulate a response is where AI-based platforms are focused. By now, we all have received a heads-up from our credit card provider that an AI system determined a charge was unusual for some reason. The reasons can be drawn from any number of data points such as user profiles of past behaviors, machine type, and geographies, as well as comparisons to other customer profiles.
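The kind of monitoring Cutlip describes can be illustrated with a toy example. This is a deliberately simple rule-based sketch over a few of the data points he mentions (amount history, machine type, geography); real AI-based fraud platforms use trained models over far richer feature sets, and all names and thresholds here are illustrative assumptions.

```python
# Toy anomaly score over a few behavioral data points.
# Illustrative only: real fraud-detection systems use trained models.
from statistics import mean, stdev

def anomaly_score(txn, history):
    """Score a transaction against a user's past behavior (0 = typical)."""
    amounts = [t["amount"] for t in history]
    mu, sigma = mean(amounts), stdev(amounts)
    score = abs(txn["amount"] - mu) / sigma          # amount z-score
    if txn["country"] not in {t["country"] for t in history}:
        score += 2.0                                 # unseen geography
    if txn["device"] not in {t["device"] for t in history}:
        score += 1.0                                 # unseen machine type
    return score

history = [{"amount": a, "country": "US", "device": "phone"}
           for a in (40, 55, 60, 45, 50)]
usual = {"amount": 52,  "country": "US", "device": "phone"}
odd   = {"amount": 900, "country": "RO", "device": "desktop"}

print(anomaly_score(usual, history) < anomaly_score(odd, history))  # True
```

A charge close to the user's normal spending, from a familiar device and country, scores near zero; a large charge from an unseen geography and device scores far higher and would trigger the kind of "heads-up" alert described above.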
DJ: What are some of the downsides to AI?
Cutlip: While AI-based systems are gobbling up data as they are trained how to perform those tasks better, a very high percentage of fraud monitoring alerts are still false positives. Chances are that the call from your credit card provider was questioning a legitimate transaction. That’s because the alternative – allowing actual fraudulent transactions to pass undetected – is a bigger concern. We received a study from the Mid-Size Bank Coalition of America (MBCA) which found that on average only 8.9% of monitoring alerts warranted investigation as a case, and only 2.8% remained suspicious after investigation. Regardless of whether the false positive rate is 97.2% or 50%, it’s clear that fraud monitoring creates new forms of business risk and customer friction.
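The MBCA figures quoted above can be worked through directly: if only 2.8% of alerts remain suspicious after investigation, roughly 97.2% of all alerts are ultimately false positives. The alert volume below is a hypothetical figure chosen for illustration.

```python
# Working through the MBCA percentages quoted above.
alerts = 100_000                      # hypothetical alert volume
cases = alerts * 0.089                # 8.9% of alerts opened as cases
suspicious = alerts * 0.028           # 2.8% still suspicious after review
false_positive_rate = 1 - suspicious / alerts

print(f"{false_positive_rate:.1%}")   # 97.2%
```

At that scale, the institution would manually review 8,900 cases to find 2,800 genuinely suspicious transactions, which is the review burden and customer friction the interview describes.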
DJ: How can AI be improved?
Cutlip: AI is an important part of any enterprise fraud detection program, but we need to be mindful that it could be on the leading edge of the hype cycle. It’s an important tool to fight crime but not a total solution. As bad actors develop their own AI capabilities, the battle simply escalates. Further, while the monitoring is performed with very sophisticated models, the investigation of alerts is often performed with legacy procedures that are themselves vulnerable to criminal activity. So, an investigative phone call or text to the account owner’s number on record might be answered by the criminals who engineered the alerted transaction in the first place. Improving the security and customer experience around alert investigations is one opportunity.
DJ: Can you talk about your solution and how it can address some of the shortcomings of AI?
Cutlip: iti’s Permission Code® technology provides fraud prevention rather than detection. We eliminate fraud by allowing users to easily and indelibly link their digital authorization to a specific transaction request, regardless of which channel the transaction is originated through. Users originate their secure transaction confirmation in the form of a Permission Code “smart-PIN,” which is embedded with both the user’s identity and details of the transaction.
For those transactions which are authorized using Permission Code PINs, iti’s definitive security has eliminated the need for AI monitoring. However, for transactions which are not yet secured by our smart-PINs, AI-based monitoring will generate alerts that need investigation before the transaction can be processed. As noted above, many of the legacy security procedures in place today still leave the institution and customer subject to criminal activity, often in the form of man-in-the-middle attacks. iti’s automated technology efficiently performs the required customer outreach with strong security (man-in-the-middle risk is eliminated) and great customer experience.
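The general idea of a code that embeds both the user's identity and the transaction details can be sketched with a standard HMAC construction. To be clear, this is a generic illustration of transaction-bound signing, not iti's actual Permission Code implementation; the function names and code format are assumptions.

```python
# Generic sketch of transaction-bound signing (NOT iti's actual
# Permission Code implementation): an HMAC over both the user's
# identity and the exact transaction details yields a short code
# that is useless for any other payment.
import hmac, hashlib

def transaction_code(user_secret: bytes, user_id: str,
                     payee: str, amount_cents: int) -> str:
    msg = f"{user_id}|{payee}|{amount_cents}".encode()
    digest = hmac.new(user_secret, msg, hashlib.sha256).digest()
    # Truncate to a 6-digit "smart-PIN"-style code for display/entry.
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

secret = b"per-user enrollment secret"   # assumed shared at enrollment
code = transaction_code(secret, "alice", "ACME Corp", 125_000)

# Amending or redirecting the payment changes the message, so the
# intercepted code will (with overwhelming probability) no longer verify.
print(code != transaction_code(secret, "alice", "Mallory LLC", 125_000))
```

Because the code is derived from the payee and amount as well as the identity, a criminal who intercepts it cannot reuse it on an amended or redirected payment, which is the BEC attack pattern described earlier in the interview.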
DJ: Will future threats challenge AI solutions?
Cutlip: Given the relentless pressure from cybercriminals, security measures have evolved from simple passwords to multi-factor challenges and AI-based monitoring as the good guys try to stay a step ahead. As criminals tune their AI platforms, detective systems will continue to be fiercely tested. Moving back to offense, iti believes that strong prevention has a big role in beating cybercrime. Unfortunately, Multi-factor Authentication has not stopped fraud due to its over-focus on ‘who’ is on the other end of the line versus ‘what’ they are doing. Some authentication companies are now stretching their tools to associate transactions with identities, but bad actors are still able to separate and misuse/repurpose the identity to commit fraud. iti’s secure transaction signing technology is a strong preventive offense that eliminates the misuse of identity and allows AI to focus on other risks.