Staff at JPMorgan Chase Will Soon Have Access to an AI Assistant Powered by OpenAI

JPMorgan Chase, the largest U.S. bank by assets, has developed a platform called LLM Suite that enables users to access large language models (LLMs), according to a report by CNBC. The software initially launched with a model from OpenAI, the company behind the widely known AI chatbot, ChatGPT.

Currently, more than 60,000 JPMorgan employees, roughly one-fifth of the bank’s total workforce, have access to LLM Suite. The rollout solidifies JPMorgan’s position as an early leader in AI adoption within the banking sector. The bank has been investing in AI for several years: it hired its head of AI research back in 2018 and has since developed more than 400 AI use cases across the organization.

Jamie Dimon, JPMorgan’s CEO, is fully committed to advancing AI integration throughout the bank. In his annual letter to shareholders in April, Dimon compared the transformative potential of AI to that of the printing press and the steam engine. He emphasized that AI could enhance nearly every job at the bank and significantly change the composition of its workforce.

This focus on AI isn’t limited to the bank’s engineers and data scientists. Mary Erdoes, who heads JPMorgan’s asset and wealth management division, revealed during the bank’s investor day in May that every new hire at JPMorgan will undergo AI training. Erdoes also highlighted that JPMorgan bankers have already benefited from AI, particularly in reducing the time spent searching for information on potential investments. This AI-driven efficiency has reportedly saved some analysts two to four hours of work each day.

JPMorgan’s president, Daniel Pinto, estimated that the various AI use cases across the bank could generate as much as $1.5 billion in value this year. However, alongside the rapid adoption of AI, JPMorgan has been cautious about the potential risks of unsupervised AI tools. In February 2023, the bank joined a growing list of companies that banned employees from using ChatGPT, OpenAI’s consumer chatbot, due to concerns about misinformation and fabricated content.

The integration of AI in the financial sector isn’t limited to JPMorgan. Wall Street as a whole has embraced the technology, introducing AI tools across multiple functions to enhance efficiency and streamline operations. For instance, Morgan Stanley, another major financial institution, launched Debrief in June—a generative AI assistant that participates in financial advisors’ meetings with client consent. Debrief identifies actionable items, summarizes key points, drafts emails, and saves notes into Salesforce. This assistant, built on OpenAI’s GPT-4, significantly enhances the efficiency of financial advisors, allowing them to dedicate more time to client engagement, according to Vince Lumia, head of Morgan Stanley Wealth Management client segments.

Goldman Sachs has also developed its own generative AI platform, known as the GS AI Platform, which builds on the bank’s existing machine learning infrastructure and gives developers selective access to AI models. The bank has worked with OpenAI backer Microsoft to access the GPT-3.5 and GPT-4 models, and it also uses Google’s Gemini model and open models such as Meta’s Llama. Despite this integration, Goldman Sachs, like JPMorgan, has banned the use of ChatGPT by its employees.

Bank of America, the second-largest U.S. bank by assets, is similarly committed to advancing AI. The bank has earmarked $4 billion for new technology initiatives in 2024, including AI. Its virtual assistant, Erica, reached a milestone of 2 billion interactions in April, with clients engaging with it roughly 2 million times each day.

Overall, the major players on Wall Street are increasingly integrating AI into their operations, recognizing its potential to drive efficiency and enhance client services. However, these institutions are also aware of the potential risks, particularly in terms of accuracy and the reliability of AI-generated content, and are taking steps to mitigate these concerns.