Next-Gen Tech

On your Generative AI journey? Here is how to navigate it

Financial services firms have been pioneers in adopting AI and machine learning for decades. Now, with Generative AI (GenAI) going prime-time, businesses need to navigate unique considerations as new tech emerges.

AI offers astounding new capabilities for businesses. Take the bond market, for example: AI now makes it possible for traders to condense 20 minutes of research into a matter of seconds. This is only the beginning of what GenAI can do. It's important to know when and how to harness its power by prioritizing security and data privacy while guarding against proprietary data leakage. As GenAI continues to reshape the industry, financial firms must adopt it thoughtfully to stay ahead. The GenAI era is here: now it's time to figure out where and how to embrace it.

Overcoming GenAI’s potential pitfalls

Businesses have concerns about how they can overcome some of the risks and challenges of Generative Pretrained Transformer (GPT) tools, such as Large Language Model (LLM) hallucinations, questions around the intellectual property of queries and responses, and the potential for information leakage and weakened data security. The risk of misuse is also a factor that firms need to consider when bringing GenAI into their technology stack or embedding the tech inside their applications.

Although there are risks with GenAI — as there are with any nascent technology — there are also ways to mitigate them. To effectively address these challenges, business leaders should prioritize a comprehensive risk assessment, clear data protection policies, and robust security measures.

Financial services firms can use GenAI to:

  • Serve as a “virtual expert” for generating tailored industry insights
  • Streamline and personalize customer operations
  • Expedite software engineering and R&D
  • Enhance marketing and sales optimization

The 3 GenAI guidelines every business needs

Individual businesses will face their own mix of challenges, concerns, and guardrails to solve for. That said, there are three elements of GenAI every company has to address at the start of its journey.

1. Protect proprietary data

Firms are concerned about inadvertently surrendering proprietary information. Many fear employees could accidentally submit sensitive business data, personally identifiable information, or proprietary knowledge into an LLM, unwittingly training the model on information it shouldn’t use when answering public queries.

Consider these issues when building or buying your GenAI tools. When partnering with a technology provider, it’s best to ensure they’re using enterprise-level tools. When LTX and Broadridge built BondGPT, a GenAI-powered app that answers bond-related questions, they chose OpenAI GPT-4. No matter which LLM they might have chosen, they would still have to implement architectural design patterns to ensure the safeguarding of data.

Anonymizing data is crucial to keeping proprietary information safe. Whether building your own GenAI platform or partnering with a provider, it’s essential that the information going in and out of the system is not leaking into a public LLM, like you’d encounter with free versions of popular chatbots available online today. The more you can obfuscate the source of your data and inputs, the more likely you are to keep proprietary and sensitive data in the right hands.
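As a concrete illustration of that anonymization step, here is a minimal sketch (not drawn from any particular product) of redacting sensitive tokens before a prompt leaves your perimeter. The regex patterns and placeholder names are assumptions for illustration; a production system would use a dedicated PII-detection service rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated
# PII-detection service rather than hand-rolled regexes.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[A-Z]{2}\d{9,12}\b"), "[ACCOUNT_ID]"),
]

def anonymize(text: str) -> str:
    """Mask sensitive tokens before a prompt is sent to an external LLM."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```

The key design point is that redaction happens inside your own infrastructure, so the public model only ever sees the placeholders.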

2. Ensure speed, accuracy, and minimal hallucinations

Striking a balance between speed and precision with GenAI can be challenging. The need for real-time data accuracy can't be overstated, and traditional updates simply don't cut it. At the same time, you also need to make sure your model isn’t hallucinating (giving you incorrect information). There are novel ways to tackle these three problems without sacrificing speed or security.

Staying fast and accurate

Achieving speed and accuracy in GenAI applications within the financial services sector presents a significant challenge. The need for real-time, up-to-the-minute data accuracy is paramount, especially in fast-paced markets. Monthly, weekly, or even daily updates are insufficient in this context. This poses a major hurdle for even the most advanced GenAI tools.

Instead of training your model on relevant information, train it to look for information from the right sources. If your model can seek out the sources of real-time information, rather than being the custodian of that information, it’s much more likely to keep pace with market changes. If your models can find the most current information pertinent to the user’s interest, it will be more likely to give users accurate and timely insights.
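The retrieval pattern described above can be sketched in a few lines. This is a hypothetical example, not the architecture of any specific product: `fetch_live_quote` stands in for a real-time market-data API, and the prompt format is an assumption.

```python
def fetch_live_quote(isin: str) -> dict:
    # Stand-in for a real-time market-data API call (hypothetical).
    return {"isin": isin, "price": 98.42, "as_of": "2024-01-05T14:30:00Z"}

def build_prompt(question: str, isin: str) -> str:
    """Ground the LLM in a freshly fetched snapshot instead of trained-in data."""
    quote = fetch_live_quote(isin)
    # The model sees only the retrieved snapshot, so its answer can keep
    # pace with the market even though its training data cannot.
    return (
        f"Using ONLY this market data: {quote}\n"
        f"Answer the question: {question}"
    )
```

Because the model is a consumer of live sources rather than the custodian of the data, updating the feed updates every answer.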

Scrutinize the user’s input, scrutinize the LLM’s output

Proper GenAI design means never trusting the user’s input. If an individual were to tell an application that a bond’s trading price is incorrect, attempting to feed the program false information, it’s vital that the LLM not take the claim at face value. Your GenAI model should treat the user’s input with scrutiny. This also means sanitizing what the user sends to the LLM — ignoring prompts that ask the app to provide wrong or inappropriate answers.

It’s crucial that your application never passes the LLM’s output directly to the user. Unchecked answers have a higher risk of generating and displaying incorrect information. To address this issue, create thorough tests that verify the accuracy of the LLM's responses from multiple angles. Answers that aren’t realistic should be flagged by the system, which then instructs the app never to answer that question the same way (or in a similar way) again.
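Both checks can be sketched simply. This is a minimal, assumed illustration: the blocked phrases, tolerance threshold, and idea of comparing an answer against an independent reference feed are examples of the pattern, not a prescribed implementation.

```python
# Hypothetical deny-list for prompt-injection attempts.
BLOCKED_PATTERNS = ["ignore previous instructions", "pretend the price is"]

def sanitize_input(user_text: str) -> str:
    """Reject prompts that try to coerce the app into wrong answers."""
    lowered = user_text.lower()
    if any(p in lowered for p in BLOCKED_PATTERNS):
        raise ValueError("Prompt rejected: possible injection attempt")
    return user_text

def verify_output(answer_price: float, reference_price: float,
                  tolerance: float = 0.05) -> bool:
    """Flag answers that stray too far from an independent reference feed."""
    return abs(answer_price - reference_price) / reference_price <= tolerance
```

A flagged answer would then feed back into the system so the same unrealistic response isn’t served again.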

3. Protect your brand and remain compliant

The regulatory landscape around GenAI is still catching up with the technology. GenAI itself is capable of answering queries and delivering information on a host of subjects, but in a financial services context, it’s critical to train the LLM on the existing regulations and laws that govern industry practices.

GenAI tools must:
  • Manage against risk
  • Offer transparency on how information is used
  • Protect against bias
  • Comply with consumer and employee protection laws

The financial services industry’s existing rules have set up a framework for how GenAI platforms should work. For example, models should not be designed to offer advice when compliance and legal rules forbid it. Instead, your tool could be used by financial advisors to help them determine their own recommendations to clients.

It’s critical that you design your platform to include a compliance layer, too. This can take the form of an AI agent that monitors the app’s output, checking each answer against existing compliance rules and enforcing them before the answer reaches the user. Build capabilities for your application to incorporate new regulatory rules as they come into effect, allowing for real-time compliance updates.
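A compliance layer of this kind can be as simple as a post-processing check on every answer. The marker phrases and refusal wording below are assumptions for illustration; a real compliance agent would encode your firm’s actual rules.

```python
# Hypothetical markers for language that reads as investment advice.
ADVICE_MARKERS = ["you should buy", "we recommend selling", "best investment"]

def compliance_filter(answer: str) -> str:
    """Block LLM output that looks like advice before it reaches the user."""
    lowered = answer.lower()
    if any(marker in lowered for marker in ADVICE_MARKERS):
        return ("I can share data and analysis, but I can't provide "
                "investment advice.")
    return answer
```

Keeping the rules in a list outside the model means new regulatory requirements can be added without retraining anything.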

Embracing the GenAI future — safely

When it comes to AI and GPT technology, the benefits outweigh the potential risks in most cases. AI tools help professionals analyze data in seconds rather than minutes, and they can surface operational efficiencies throughout an organization, providing a competitive advantage that translates into smarter decisions. It’s possible to build or onboard safe AI products at scale, so long as you have the right guardrails, partners, and highly trained employees steering the ship.