Since its inception, AI has experienced at least two major hype cycles, each followed by a winter of disillusionment. Although many financial firms deployed successful applications after the first winter, AI entered its second winter by the 1990s, as the realization set in that these systems were harder and more costly to build and maintain than first anticipated. AI now appears to be entering a new phase of surging interest. One example is the sharp increase in the commercial use of AI, also known as machine intelligence, such as IBM’s Watson. As another indicator, the vast majority of respondents to the 2014 Future of the Internet study anticipate that robotics and machine intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries. Will the latest surge of AI applications in financial services fall short again, or will it this time truly transform the industry?
Users are increasingly exposed to customized, context-sensitive information and advice derived by systems that collect and analyze their past actions, often without the users being aware of it. The implication for the financial sector is that by tracking users’ habits, activities, and behavioral characteristics, financial data and products can be personalized to meet and anticipate each user’s unique and changing needs. This makes it practical for each user to have his or her own digital personal financial assistant in the following venues.
Personalized Financial Services
Because of the increased customized automation, the financial institution can offer more personalized services in near real-time at lower costs. We already are starting to see a number of successful new applications that provide hints as to where the industry may be heading. Consider the following examples of applications that are being developed and deployed:
- Automated financial advisors and planners that assist users in making financial decisions. These include monitoring events and stock and bond price trends against the user’s financial goals and personal portfolio and making recommendations regarding stocks and bonds to buy or sell. These systems are often called “robo advisors” and are increasingly being offered both by start-ups and established financial service providers.
- Digital wealth-management advisory services offered to lower-net-worth market segments, resulting in lower fee-based commissions.
- Smart wallets that monitor and learn users’ habits and needs and alert and coach users, when appropriate, to show restraint and to alter their personal finance spending and saving behaviors (e.g., Wallet.AI).
- Insurance underwriting AI systems that automate the underwriting process and use more granular information to make better decisions.
- Data-driven AI applications that support better-informed lending decisions.
- Applications embedded in end-user devices, personal robots, and financial institution servers that are capable of analyzing massive volumes of information and providing customized financial advice, calculations, and forecasts. These applications can also develop financial plans and strategies and track their progress, including research into customized investment opportunities, loans, rates, and fees.
- Automated agents that assist the user, over the Internet, in determining insurance needs.
- Trusted financial social networks that allow the user to find other users who are willing to pool their money to make loans to each other, and to share in investments.
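The portfolio-monitoring behavior described above can be illustrated with a minimal sketch. All asset names, holdings, prices, and the drift tolerance here are hypothetical; real robo advisors use far richer goal models and market data.

```python
# Minimal robo-advisor-style drift check (hypothetical data and thresholds).
# Compares a portfolio's current allocation to its target weights and flags
# assets whose weights have drifted beyond a tolerance, suggesting a rebalance.

def rebalance_alerts(holdings, prices, target_weights, tolerance=0.05):
    """Return {asset: (current_weight, target_weight)} for assets out of tolerance."""
    values = {a: holdings[a] * prices[a] for a in holdings}
    total = sum(values.values())
    alerts = {}
    for asset, target in target_weights.items():
        current = values.get(asset, 0.0) / total
        if abs(current - target) > tolerance:
            alerts[asset] = (round(current, 3), target)
    return alerts

# Example: equities rallied, so the stock fund now exceeds its 60% target.
holdings = {"stock_fund": 100, "bond_fund": 100}
prices = {"stock_fund": 100.0, "bond_fund": 50.0}
targets = {"stock_fund": 0.60, "bond_fund": 0.40}
print(rebalance_alerts(holdings, prices, targets))
# → {'stock_fund': (0.667, 0.6), 'bond_fund': (0.333, 0.4)}
```

A production system would layer recommendations (which lots to buy or sell, tax consequences, fees) on top of this kind of drift signal.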
New Management Decision-making
Data-driven management decisions at lower cost could lead to a new style of management in which future banking and insurance leaders ask the right questions of machines, rather than of human experts. The machines will analyze the data and produce the recommended decisions that leaders and their subordinates will use, and will motivate their workforce to execute.[21]
Reducing Fraud and Fighting Crime
AI tools that learn and monitor users’ behavioral patterns to identify anomalies and warning signs of attempted or actual fraud, and that collect the evidence necessary for conviction, are also becoming more commonplace in fighting crime.
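The core idea of such tools — learning a user’s normal behavior and flagging deviations — can be sketched very simply. The history, transactions, and the three-standard-deviation threshold below are hypothetical; real fraud systems use many behavioral features (location, merchant, timing) and learned models rather than a single statistic.

```python
# Minimal sketch of behavioral fraud flagging (hypothetical data and threshold).
# Learns a user's typical transaction amount from history, then flags new
# transactions that deviate by more than k standard deviations from the mean.

from statistics import mean, stdev

def flag_anomalies(history, new_transactions, k=3.0):
    """Return the subset of new_transactions that look anomalous."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        sigma = 1e-9  # avoid division by zero when history is flat
    return [t for t in new_transactions if abs(t - mu) / sigma > k]

history = [20.0, 35.0, 25.0, 30.0, 22.0, 28.0]  # typical card spend
print(flag_anomalies(history, [27.0, 950.0]))   # → [950.0]
```

Flagged transactions would then feed a case-management workflow, where the supporting evidence (the deviation, the history it was measured against) is retained for investigation.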
As businesses begin to rely more on data-driven AI applications, these applications raise new business, security, and privacy concerns, including:
- How will they differentiate themselves?
- How does a user distinguish one automated on-line banking application from another?
- How can one benchmark and rank the quality of the recommendations?
- Which financial institution and application will the user trust to provide access to his/her financial details across financial institutions?
- Will more comprehensive access to data across institutions result in better advice?
- How can this be demonstrated?
- Is the speed of execution, the ability to act and provide information in real or near-real time, more important, or equal in importance to the recommendations?
- Given that AI systems can also explain their recommendations, how important is the ability to explain them in a convincing and understandable manner?
- How easy will the system be to use?
Most likely all of the above will be qualities that will determine which financial institutions’ products and services will prevail in the marketplace.
Security and Privacy Concerns
- When things fail, or when AI applications are attacked, access is denied, or their recommendations are tampered with, the consequences could be devastating.
- If applications get compromised or tampered with, the user will get poor or false advice.
- If the user can’t identify an application as genuine and valid with a high level of assurance, the user could be handing personal information and goals over to the wrong applications or act on malicious or bad advice.
- Is there an equivalent for a “Series 7” certification for a robot advisor, and who is liable when providing inappropriate advice?
- Likewise, if the applications can’t identify their users with high enough assurance, criminals could successfully impersonate the real user and convince the program to turn over sensitive data, or to take instructions from the wrong person.
- It could result, among other things, in lost funds, reduced eligibility for loans and insurance and destroyed reputations.
- How do we assess and audit the financial institutions and third parties that develop and run these applications?
Another concern for financial institutions is how regulators will respond to and supplement guidance on the use of AI. Federal financial regulators have issued extensive supervisory guidance on the use of information technology generally, and on security, privacy, vendor management, and resiliency specifically, which requires financial institutions to assess risk and develop adequate controls. As the number of AI applications increases, regulators are likely to focus more on the use of AI and to identify deficiencies in controls.
Because of the significant potential benefits, there is probably no turning back: automation of financial services will continue to increase, often employing AI technology. However, these new AI applications introduce a number of business, security, and privacy issues that will have to be addressed if they are to succeed in the marketplace. It will be important to ensure that these intelligent applications are developed in a way that provides the desired benefit and that users can trust the advice and services provided. It will also be important to be able to detect and isolate infected or malicious AI programs immediately, and to develop effective policy and laws governing their development and use, so that personal information is safeguarded and not misused. This includes technology and policy with respect to what constitutes liability, how best to audit these systems, and how to design and control AI systems for human safety.