There are clear advantages for companies in using generative AI tools: they can provide financial analysis, forecasting and report generation.  Chatbots are another area where generative AI can offer human-like interactions, responding to queries and generating content such as emails and documents, supporting a fully optimised client service model.

While companies can see clear productivity gains from using generative AI chatbots, concerns remain regarding privacy and the protection of confidential financial data.

OpenAI and privacy

OpenAI tools such as ChatGPT will reuse the prompts entered into their platform to train their models, unless the user opts out.

This is one of the reasons why several large companies, such as Apple and Samsung, have restricted their employees' use of generative AI models: the potential risk of employees inadvertently sharing proprietary or confidential data.
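
One common mitigation is to filter or mask sensitive terms before a prompt ever leaves the organisation. The sketch below illustrates the idea in Python; the term list, placeholder and function name are hypothetical and not a description of any particular vendor's tooling.

```python
import re

# Hypothetical list of confidential terms a company might want to keep
# out of prompts sent to an external generative AI service.
CONFIDENTIAL_TERMS = ["Project Atlas", "Q3 revenue forecast", "client-ref-4821"]

def redact_prompt(prompt: str) -> str:
    """Replace known confidential terms with a neutral placeholder
    before the prompt is sent to an external service."""
    redacted = prompt
    for term in CONFIDENTIAL_TERMS:
        redacted = re.sub(re.escape(term), "[REDACTED]", redacted, flags=re.IGNORECASE)
    return redacted

if __name__ == "__main__":
    prompt = "Summarise the Q3 revenue forecast for Project Atlas."
    print(redact_prompt(prompt))
    # -> "Summarise the [REDACTED] for [REDACTED]."
```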

Microsoft Bing Chat Enterprise

Microsoft Bing Chat Enterprise was developed in response to these concerns: chat data is not saved, ensuring that it remains private.  This distinguishes it from chatbots built on more open models.  Bing Chat Enterprise provides a similar user experience to Bing Chat, offering answers with citations as well as visual answers including charts and images. It is available at no extra cost with existing Microsoft 365 E3, E5, Business Standard and Business Premium subscriptions, and Microsoft also plans to sell a standalone subscription in the future.

Other solutions for companies include developing in-house chatbots based on their own data, which ensures their data stays in house.
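
A minimal sketch of this in-house approach is shown below: internal documents are kept on company infrastructure, the most relevant ones are retrieved for each question, and the resulting prompt would be sent to a locally hosted model rather than an external service. The documents, scoring method and function names are illustrative assumptions; a real deployment would use a self-hosted LLM and a proper vector store.

```python
# Minimal sketch of an in-house, retrieval-based chatbot.
from collections import Counter

INTERNAL_DOCS = [
    "Expense reports must be submitted within 30 days of the transaction.",
    "Quarterly forecasts are prepared by the FP&A team in the first week of each quarter.",
    "Client onboarding requires a signed KYC form and a compliance review.",
]

def score(query: str, doc: str) -> int:
    """Very simple relevance score: count of shared lowercase words."""
    q_words = Counter(query.lower().split())
    d_words = Counter(doc.lower().split())
    return sum((q_words & d_words).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k internal documents most relevant to the query."""
    return sorted(INTERNAL_DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def answer(query: str) -> str:
    """Build a prompt from retrieved context; in production this prompt
    would go to a locally hosted model so data never leaves the company."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(answer("When do expense reports have to be submitted?"))
```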

Data and accuracy

In addition to privacy, concerns over the use of generative AI tools generally relate to accuracy and hallucinations. The data on which a tool is based needs to be clean and of good quality for it to generate correct content.

Lingua Custodia and Generative AI

Lingua Custodia has been working on generative AI models for its financial clients for several years and, as a specialist in the financial industry, is very aware of the importance of keeping its clients' data private.  Lingua Custodia's Data team plays a key role in ensuring that the underlying data is clean and of good quality, which is fundamental to the accuracy and reliability of the responses. If a model is trained on unreliable data, the quality of its output will suffer.
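
As an illustration of the kind of preparation such a data team might perform, the sketch below shows a few basic cleaning steps (whitespace normalisation, dropping fragments, and removing exact duplicates) applied to raw training text. The steps and thresholds are illustrative assumptions, not a description of Lingua Custodia's actual pipeline.

```python
import re

def clean_corpus(records: list[str], min_length: int = 20) -> list[str]:
    """Basic cleaning of raw text records before they are used to train
    or fine-tune a model. Thresholds here are illustrative."""
    seen = set()
    cleaned = []
    for record in records:
        text = re.sub(r"\s+", " ", record).strip()  # normalise whitespace
        if len(text) < min_length:                   # drop near-empty fragments
            continue
        if text in seen:                             # drop exact duplicates
            continue
        seen.add(text)
        cleaned.append(text)
    return cleaned

if __name__ == "__main__":
    raw = [
        "Net income rose   by 4% in the third quarter.",
        "Net income rose by 4% in the third quarter.",
        "TBC",
    ]
    print(clean_corpus(raw))  # duplicate and fragment removed
```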
