LangChain is an open-source developer framework for building LLM applications. It is available in Python and JavaScript, and is mainly focused on composition and modularity.
It comprises multiple components: prompts, models, indexes, chains, and agents. Models refer to language models. Prompts refer to the style of creating inputs that are passed to the models. Output parsers take the model's response and parse it into a more structured format.
Why LangChain
When we build applications using LLMs, there are often reusable patterns: we repeatedly prompt a model and parse its outputs. LangChain provides an easy set of abstractions for this type of operation.
Key components of LangChain
- Prompts are templates for the input text fed into LLMs. LangChain helps to create and manage prompts efficiently.
- Chains are sequences of calls to LLMs or other utilities, used to build complex workflows by combining multiple steps, e.g., querying a database and then generating a response.
- Agents are autonomous entities that use LLMs to perform tasks; they are useful for handling specific jobs such as conducting research or performing actions based on user input.
- Models are the underlying language models used for generating and processing text, e.g., OpenAI's GPT-3 and GPT-4 models.
LangChain provides higher-level abstractions, offers flexibility through its support for multiple LLMs, and helps build scalable applications.
Key APIs provided by LangChain
- LLM APIs are used to generate text and complete the prompt using the LLM.
- Prompt APIs are used to create, manage and format the prompt templates.
- Chain APIs allow chaining multiple LLM calls or other operations together.
- Agent APIs are designed to create autonomous agents that perform tasks using LLMs or other tools.
- Tool APIs allow integration of external tools and services used by agents.
- Memory APIs are used to store and retrieve information that needs to be persisted across multiple interactions or sessions.
- Document APIs are used for handling documents, integrating with LLMs for tasks such as information retrieval and summarization.
Here is the code snippet to create a chat model:
ChatOpenAI() is used to create a chat model by passing the model name and a temperature value.
Here is the sample code snippet to interact with the chat model:
ChatPromptTemplate.from_template() creates a prompt template from a template_string; this string contains placeholders for dynamic content (here, the variables style and text). prompt_template.format_messages() fills in those placeholders with specific values for style and text, and chat() sends the formatted messages to the chat model.
Parsing the LLM output string
The langchain.output_parsers module provides APIs for parsing and structuring LLM output. This is useful when you need the model's response in a specific format or want to extract specific information from it. The module contains two key classes: ResponseSchema defines the schema of the expected response, including the type and format of each field, and StructuredOutputParser parses the model's output according to the specified schema.
Here is the sample code snippet to define a schema using ResponseSchema() and then parse the model's response (response.content) into structured output:
Chains
The most important building block of LangChain is the chain. A chain combines an LLM with a prompt, and you can combine multiple chains to carry out a sequence of operations on a given text or data.
LLM Chain: a combination of an LLM and a prompt. This chain runs the prompt, the LLM, and optionally an output parser in sequence.
Here is the sample code snippet to use LLM Chain:
chain is defined as the combination of the LLM and the prompt.
Sequential chains are another type of chain, used to combine multiple chains where the output of one chain is the input of the next. There are two types: SimpleSequentialChain handles a single input and output per step, while SequentialChain can have multiple inputs and outputs.
Here is the sample code snippet for usage of simple sequential chains:
Here is the sample code snippet to use SequentialChain:
Router Chain
A router chain is similar to a switch statement in programming: it is a powerful concept that lets you route inputs to different chains based on some logic. This is useful when you want to handle various types of inputs or tasks within a single framework.
Creating a router chain involves a few steps:
- Installing the necessary packages
- Importing the required classes
- Setting up a language model
- Defining prompt templates and the individual destination chains
- Defining the router chain
- Using the router chain
Introducing Jyo Services — Your Go-To Partner for Tech Solutions
Are you looking to boost your career or bring your tech ideas to life? Jyo Services offers a range of specialized solutions tailored just for you:
- One-on-One Interview Preparation: Tailored coaching for freshers and professionals with up to 7 years of software development experience.
- Custom Software Development: Expertise in embedded technologies, AI/ML, Gen AI, Cyber Security, Linux-based applications, and network protocol developments.
- Patent Assistance: Comprehensive support for drafting, filing and maintaining patents in India.
- College Projects: Professional guidance for Engineering students on academic projects.
Unlock your potential with Jyo Services! Contact us today to get started.
DM me at: www.linkedin.com/in/jyothivs or jyos.v.s@gmail.com
References:
learn.deeplearning.ai