How to understand the core concepts of AI, LLMs and RAG!

If you find some of the different terminology used for Large Language Models (LLMs) and AI confusing, you are not alone!

This is the first in a series of articles about AI, LLMs and Retrieval Augmented Generation (RAG), where we aim to explain, clearly and succinctly, some of the key terminology you might be hearing about. We hope you find these posts helpful!


What are foundation models?


A foundation model is an AI model trained on huge amounts of data (documents, audio, images, text…). It is trained to ‘generate’ the next word as it ‘learns’ the language. It can then be specialised and fine-tuned for a wide variety of applications and tasks, at which point it is no longer a foundation model!


What are LLMs?


An LLM is an umbrella term used for all foundation and specialised models.

For example:

In the case of Llama, the foundation model is not usable directly but serves as the foundation for all the subsequent specialised models. Llama Instruct is a question-answering model and Code Llama is a coding assistant.

All three models are LLMs.

What are the benefits and challenges of a foundation model?

In terms of benefits: 


Flexibility and adaptability

Foundation models are flexible and adaptable as they can be fine-tuned for a wide range of tasks, saving time and resources compared to building new models from scratch for each specific task.

Cost efficiency

While foundation models are costly to build, once you have one, you can adapt it as many times as you want to new tasks.

Accessibility

Open source foundation models improve accessibility: smaller companies with limited computational resources can leverage these models to create innovative AI applications. (Note that there are many closed models which are not accessible!)

(Note: with open source foundation models, almost anyone can use the model, access the source code and customise it, which in theory improves accessibility, transparency etc. Meta’s Llama 2 is an open source foundation model. ChatGPT is not open source.)

As for the challenges: 

Bias

Foundation models are trained on large and diverse data sets which may contain biases, and these biases will be mirrored in the model’s outputs.

Security and privacy

The huge amounts of data needed to train a foundation model naturally raise security and privacy concerns. The data should be secure and handled responsibly.

Lack of transparency

Foundation models can be a ‘black box’. The issue with data has already been highlighted. In addition, it is important to understand how a foundation model generates its outputs in order to identify any potential errors or bias. This is a hot topic with ongoing empirical research.

Lingua Custodia wins the Large AI Grand Challenge Award organised by the European Commission!

AI award

Lingua Custodia wins the Large AI Grand Challenge

The French Fintech company Lingua Custodia, a specialist in Natural Language Processing (NLP) applied to Finance since 2011, was delighted to receive an award in Brussels yesterday. This award, which was presented by EU Commissioner Thierry Breton, is designed to reward innovative start-ups and SMEs for devising ambitious strategies and making commitments to develop large-scale AI foundation models that will provide a competitive edge for Europe.

Together with 3 other technology SMEs, Lingua Custodia will share a total prize of €1 million and access to two of Europe’s world-leading supercomputers, LUMI and LEONARDO, for 8 million hours. This challenge was highly competitive and received 94 proposals.

Lingua Custodia’s AI foundation models

Lingua Custodia’s winning proposal focused on developing a series of AI foundation models with 3 major objectives, using the company’s existing skills and known expertise in the AI arena:

  • Build very cost effective, fast and efficient models to run on smaller servers and democratize the technology while reducing energy consumption
  • Ensure the models can handle multilingual queries and make them available to non-English speakers
  • Tune the models for the retrieval of information (RAG) to enhance the usage of generative AI for multilingual knowledge management.
Lingua Custodia’s focus on cost and energy efficient AI foundation models


Olivier Debeugny, CEO of Lingua Custodia, declared to Thierry Breton: “Lingua Custodia is an AI company, that has raised a modest amount of capital since its launch. This has been a catalyst for our creativity and resourcefulness and we therefore have the skills to optimize everything we develop. This is why we have been working on the design of multilingual, extremely cost and energy efficient models to be applied to an AI use case with a high Return on Investment.”

About Lingua Custodia

Lingua Custodia is a Fintech company and a leader in Natural Language Processing (NLP) for Finance. It was created in 2011 by finance professionals, initially to offer specialised machine translation.

Leveraging its state-of-the-art NLP expertise, the company now offers a growing range of applications: Speech-to-Text automation, linguistic data extraction from unstructured documents, etc., and achieves superior quality thanks to highly domain-focused machine learning algorithms.

Its cutting-edge technology has been regularly rewarded and recognised by both the industry and clients: Investment houses, global investment banks, private banks, financial divisions within major corporations and service providers for financial institutions.

LLMs – Generative AI is not Sci-fi!

LLMs

Lingua Custodia was delighted to co-host this event with Cosmian, a company specialised in cybersecurity, at Le Village by CA Paris.


What are LLMs?

Gaëtan Caillaut’s presentation for Lingua Custodia focused on Large Language Models (LLMs) and aimed to ‘demystify’ the engineering and science behind them. He highlighted that LLMs are a type of AI program able to recognise and generate text. These models are trained on large sets of data, which allows them to learn the probability of the next word, based on the context of the word or phrase.
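As a rough illustration of this next-word probability idea, here is a minimal sketch using the open source Hugging Face transformers library and the small public gpt2 model (both chosen purely for illustration; they were not part of the presentation). It prints the most likely next tokens for a short prompt.

```python
# A minimal sketch of next-token prediction, assuming the Hugging Face
# "transformers" library and the small public "gpt2" model (illustrative
# choices only, not the models discussed in the presentation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The bank approved the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the vocabulary for the *next* token
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)

for p, i in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(i)):>12}  p = {p.item():.3f}")
```

A full LLM does exactly this at a much larger scale, repeatedly choosing one token at a time to generate text.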

What are the limitations of LLMs?

The limitations of LLMs were also discussed. The quality of the generated text is very dependent on the underlying data, and there is also a risk that these models can misinterpret the context of a word or phrase. An LLM hallucination happens when the model generates text that is irrelevant or inconsistent with the input data.
LLMs are also very expensive to run and complicated to train.

Retrieval Augmented Generation and RLHF for fine-tuning

He highlighted the benefit of RAG (Retrieval Augmented Generation), which references an external knowledge base to improve the accuracy and reliability of LLMs. RAG helps to enhance LLM capabilities and has the advantage of not requiring additional model training.
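To make the RAG idea concrete, here is a minimal sketch assuming scikit-learn for the retrieval step; the generate_answer function is a hypothetical placeholder for any LLM call and does not represent a particular product or API. The relevant passages are retrieved from a small knowledge base and prepended to the prompt, so the model answers from the supplied context rather than from memory alone.

```python
# A minimal sketch of Retrieval Augmented Generation (RAG), assuming
# scikit-learn for the retrieval step. `generate_answer` is a hypothetical
# stand-in for any LLM call and does not represent a specific product or API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "The fund's annual management fee is 0.75% of assets under management.",
    "Redemptions are processed daily with a two-day settlement period.",
    "The fund invests primarily in euro-denominated corporate bonds.",
]

def retrieve(query, documents, top_k=2):
    """Return the top_k documents most similar to the query (TF-IDF + cosine)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = sorted(zip(scores, documents), reverse=True)
    return [doc for _, doc in ranked[:top_k]]

def generate_answer(prompt):
    """Hypothetical placeholder for a call to an LLM."""
    return f"[An LLM would answer here, grounded in]\n{prompt}"

query = "What are the management fees?"
context = "\n".join(retrieve(query, knowledge_base))
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {query}"
)
print(generate_answer(prompt))
```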

RLHF (Reinforcement Learning from Human Feedback) is one of the most widely used fine-tuning approaches. It uses human feedback to make the model more efficient, logical and helpful.

Lingua Custodia’s Generative AI Multi-Document Analyser


Olivier Debeugny, Lingua Custodia’s CEO, then presented the multi-document data extraction technology which uses RAG to optimise the data extraction quality.

Please note that Lingua Custodia now has a new address in Paris, Le Village by CA Paris, at 55 Rue La Boétie, 75008. We are delighted with our new offices and thrilled to be part of this dynamic ecosystem which prioritises supporting startups and SMEs.

Digital Finance – TQ Accelerator

Lingua Custodia is delighted to have been accepted to participate in the Digital Finance program at Tech Quartier, Frankfurt.

Lingua Custodia is one of 15 start-ups selected to network and connect at the Frankfurt financial centre. The program brings together corporates and start-ups in the digital finance domain with the goal of driving innovation to create tangible solutions for the finance industry.

The program runs over a 6-week period from 28 May to 11 July 2024.

Our CEO, Olivier Debeugny, has just completed the first 2 weeks of this program and has found the experience to be insightful and very interesting. The selected start-ups have been encouraged to collaborate on specific finance use cases, with each start-up able to share its experience and knowledge of digital AI solutions.

The networking opportunities have also helped Lingua Custodia to meet potential clients and broaden its contacts in Germany.

Olivier Debeugny participated in the discussion panel at the Network Fintech Start Up event on 6 June, which focused on ‘Open Finance & AI: Shaping the Future’ for digital finance. Some of the key points highlighted during this event were:

AI and productivity

The expectation is that the implementation of AI in the financial domain will help to boost productivity. The priority use cases are better risk and compliance management, simpler decision making and improved forecasting accuracy. AI will also be used to optimise the client experience, though the main focus for financial companies is really productivity gains.

The challenges and risk of AI for finance

There is clear recognition of the regulatory and ethical implications of AI, as well as of data security concerns. The panel underlined the importance of critical thinking when reviewing AI processes.

The future outlook for AI for finance

The future is promising! 

AI should help to reduce the repetitive and low value-added aspects of individuals’ roles. The skills which will be important are adaptability and flexibility, as well as critical thinking. Individuals should embrace new AI technologies while aiming to understand them and being aware of the importance of ethics, diversity and security.

Generative AI Data Extraction for Due Diligence Reviews

Due Diligence Review

Solution

The Multi-Document Analyser enables the legal team to upload and analyse these legal documents. They can input queries like “Identify non-disclosure agreements.” The tool extracts relevant sections, translates them if necessary, and presents clear answers, helping lawyers expedite the review process and identify critical legal information.

Generative AI Finance Document Processing

Challenge

 A law firm is conducting due diligence for a merger or acquisition, involving a vast number of legal documents, contracts, and agreements in various languages. Legal experts need to quickly identify clauses related to liabilities, intellectual property, and compliance issues to assess potential legal risks.

Customer

Legal and Compliance Team

Services Provided

Multi-Document Analyser
Translation technologies 

Financial Document Analysis

Financial Document Analysis for Investment Decisions

Solution

Lingua Custodia’s Generative AI Multi-Document Analyser allows the analyst to upload these documents, input specific queries (e.g., “Show me revenue growth trends”), and receive concise answers in natural language. The tool’s multilingual support and data extraction capabilities streamline the analysis process, enabling the analyst to make more timely, data-driven investment choices: queries can be asked in the analyst’s native language and answered in that language, even if the documents are in other languages.

Challenge

A financial analyst at a large investment firm needs to quickly extract critical information from a diverse range of financial documents, including annual reports, earnings transcripts, and regulatory filings, in multiple languages. They need to identify key financial metrics, such as revenue growth, profitability, and risk factors, to make informed investment decisions.

Customer

Financial Products Team

Services Provided

Multi-Document Analyser
Translation technologies 

Lingua Custodia’s Generative AI Multi-Document Analyser

 

Multi-Document and Multi-Lingual Data Extraction 

Lingua Custodia’s Multi-Document Analyser is now in production and can be accessed via the secure platform or through an API.
This means that a group of documents can be uploaded at the same time, with key information extracted within seconds. As this technology is also integrated with our machine translation engines, it is possible to load your documents in different languages and extract your data using a different language prompt.

As an example, French, Chinese and Spanish documents can be uploaded and you can ask your queries in English or any other language which is supported by the platform.

You can upload up to 15 PDF documents together. The technology has been optimised to read and extract information in tables and to respond swiftly to queries.
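Purely as an illustration of the upload-then-query pattern described above, a call to such an API might look like the sketch below. The base URL, endpoints, field names and authentication header are hypothetical examples, not Lingua Custodia’s actual API; please refer to the platform documentation for the real interface.

```python
# Hypothetical sketch of a multi-document upload-and-query workflow over a
# REST API. The base URL, endpoints, field names and header below are invented
# for illustration only; they are NOT Lingua Custodia's actual API.
import requests

BASE_URL = "https://api.example.com/v1"            # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}   # hypothetical auth scheme

# 1. Upload a batch of PDF documents (in any supported language)
files = [("files", open(path, "rb")) for path in ["report_fr.pdf", "report_zh.pdf"]]
upload = requests.post(f"{BASE_URL}/documents", headers=HEADERS, files=files)
batch_id = upload.json()["batch_id"]               # hypothetical response field

# 2. Ask a question in English about the uploaded documents
payload = {"batch_id": batch_id, "question": "What are the key risk indicators?"}
answer = requests.post(f"{BASE_URL}/query", headers=HEADERS, json=payload)
print(answer.json())
```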

What are the use cases for the Multi-Document Analyser?

The use cases for the multi-document analyser include financial product and regulation queries, client support queries, requests for proposals and due diligence questionnaires. So, it is possible to upload a group of Key Investment Documents and extract the risk performance indicators and other details.

Lingua Custodia focuses on innovation

Lingua Custodia is very proud of this highly innovative technology, which is part of a suite of financial processing services it provides. It was set up by 2 financial professionals in 2011, originally to provide specialised machine translation services, having identified a clear use case for this service. Since 2020, new technologies, such as speech-to-text and data extraction, have been added progressively.

 Its aim is to be the market leader in financial document processing for financial institutions, and Lingua Custodia is distinguished by its focus on data security as it recognises that this is a priority for its clients.  Data is stored on bare metal servers in Europe.

 

Why AI will not be replacing humans anytime soon!

Lingua Custodia


The last 18 months have seen dramatic developments in the arena of Artificial Intelligence (AI). The emergence of Large Language Models such as ChatGPT, which can analyse, respond to and generate text, was a major event. This then led to the rapid emergence of other models, focused on sentiment analysis, image and voice recognition.

This has understandably led to concerns about the impact of these innovations on the human workforce. Will AI innovations make humans redundant?!

At Lingua Custodia, we feel strongly that the response is no. These technologies will boost productivity and create new job opportunities.  AI is to be embraced rather than feared!


AI and humans learn differently

Large language models process queries or prompts using mathematical operations to identify patterns learned from a huge volume of data. These prompts are then converted into text outputs.

These models learn by correlation: for example, they can link two variables such as studying and grades. A human brain, however, learns by causation, understanding that a change in one variable can impact the other: if you study, you might get better grades, whereas if you do not study, your grades might suffer.

So, AI and human brains do not learn in the same way; they are different. AI may well be able to process huge volumes of data faster than a human brain, but the human brain can identify causation as well as add layers of creative thought, consciousness, and ethics.
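As a toy illustration of the correlation point (with invented numbers), the short snippet below computes the correlation between study hours and grades. The coefficient links the two variables statistically, but it says nothing about which variable causes the other; that causal interpretation is the step a human adds.

```python
# Toy illustration with invented numbers: a strong correlation between study
# hours and grades links the two variables statistically, but the coefficient
# alone does not tell us that studying *causes* better grades.
import numpy as np

study_hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])
grades = np.array([52, 55, 61, 64, 70, 73, 78, 84])

correlation = np.corrcoef(study_hours, grades)[0, 1]
print(f"Correlation between study hours and grades: {correlation:.2f}")
```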

AI should be used to boost productivity

Future job roles will use AI as a tool to boost productivity. So, an engineer might use AI to check their code for potential errors. In the financial industry, which is championing AI, it can be used to identify risk, rapidly analyse investment opportunities and optimise client services through the use of chatbots.

The Lingua Custodia platform, which is specialised for the financial services sector, contains several AI technologies which are all focused on adding value for our clients. Our secure platform allows the rapid communication, extraction and analysis of data in different languages. For example, machine translation technology translates documents and text within seconds, while our Document Analyser rapidly extracts key data from large PDF documents.

Lingua Custodia features on the Wavestone radar for French Generative AI Startups 2023

Generative AI

Lingua Custodia was delighted to feature on the Wavestone Radar for French Generative AI start-ups in 2023.

What is the Wavestone Radar?

Wavestone is a global consulting company which has a start-up accelerator focusing on emerging trends in the start-up ecosystem. It shares the results of these market insights through the publication of Wavestone Startup Radars.

What is Generative AI?

Generative AI is a type of model trained to spot patterns in data, which enables it to then generate new content based on those patterns. So, for example, in the finance industry, Generative AI models can be used to analyse trading and investment data, identifying patterns to generate trading opportunities.

Lingua Custodia

Lingua Custodia is included on the Generative AI radar within the ‘Gestion de la Connaissance’ (Knowledge Management) category, because of its focus on financial document processing and its range of technologies for data extraction and analysis.

It is a huge achievement to feature on this radar.  Lingua Custodia was initially created in 2011 by finance professionals to offer specialised machine translation.

Leveraging its state-of-the-art NLP expertise, the company now offers a growing range of financial document processing solutions in addition to its initial Machine translation technology.

Lingua Custodia’s document analyser uses Generative AI applied to a large language model (LLM) to search for specific information in confidential documents, extracting and then summarising the information.

The key advantage of our document analyser is the rapid extraction of the relevant data in response to a series of queries. The document analyser is multilingual, available in 10 languages. This allows you to query a document in a different language to the one it is written in. The use cases for the document analyser include requests for proposals, regulatory, compliance, research and security documents.

The source references are also included which helps with verifying and checking the accuracy of the responses.  The ability to query several documents at the same time will be developed and live on the platform by the end of Q1 2023.

The EU AI Act – Supporting innovation and building trust across the financial services industry.


The EU AI Act was agreed by the European Parliament in December 2023 and the final text is likely to be published in early 2024.

This Act will apply to all industries across the European Union and is aimed at continuing to foster innovation while ensuring the protection of individuals’ rights, through stricter regulation of high-risk AI technologies and the promotion of transparency and trust across AI technologies.

It recognises that innovation is essential for competitiveness, so this Act also includes the creation of regulatory sandboxes to facilitate the development, testing, and validation of innovative AI systems under strict regulatory oversight.

The Act establishes rules and obligations for AI technology, based on the potential risk to the user and society.  Five risk levels are defined, with stricter obligations for technologies deemed to be at higher risk.

Technologies with an unacceptable risk are banned, such as systems aimed at exploiting vulnerabilities or at behavioural manipulation. Technologies which are deemed high risk, with the potential to impact fundamental rights, democracy and health and safety, will be required to comply with extensive governance activities to ensure they are compliant with the Act.

AI systems which are categorised as limited risk will need to ensure that they are fully transparent. This means, for example, that users should be aware whether they are interacting with an AI chatbot or a human.

What does the EU AI Act mean for financial services?

Many of the AI technologies used across the financial services industry fall into the high-risk category, such as trading algorithms, risk analysis and credit scoring.  The onus will be on financial institutions to demonstrate that their models can be understood and that their underlying data is unbiased and of good quality. 

Meeting the requirements of the EU AI Act has the advantage of winning consumer trust, as consumers are becoming very aware of the importance of ethical AI and the need to respect their rights and privacy.

While the EU AI Act might take two years to come into force, financial institutions should act now to analyse their AI technologies and make any necessary changes to comply with the required obligations.


Lingua Custodia’s Generative AI Document Analyser


Our latest generative AI financial document processing technology, the Document Analyser, allows the rapid extraction of key data from large PDF documents such as the EU AI Act. It is fully secure, like our other technologies, multilingual, and provides source referencing!

You can test it here!