
# Integrating Open Source Models with LangChain

LangChain is a popular framework for building LLM applications. This guide shows how to integrate open source models with it.

## Installation

Install LangChain and required dependencies:

```bash
pip install langchain langchain-community transformers torch sentence-transformers faiss-cpu
```
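Before loading a large model, it is worth checking that the install worked and whether PyTorch can see a GPU. A quick sanity check (CPU-only inference works too, just slowly):

```python
import torch

# Confirms the install and reports whether CUDA acceleration is available.
print(torch.__version__, "CUDA available:", torch.cuda.is_available())
```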

## Basic Usage

Connect to open source models through a Hugging Face `transformers` pipeline:

```python
from langchain_community.llms import HuggingFacePipeline
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Llama 2 is a gated model: request access on Hugging Face and log in first.
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap the transformers pipeline so LangChain can drive it.
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
)
llm = HuggingFacePipeline(pipeline=pipe)

response = llm.invoke("What is artificial intelligence?")
print(response)
```
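Beyond single prompts, the wrapped pipeline composes with the rest of LangChain. A minimal sketch, assuming the `llm` defined above is in scope; the template text and topic are illustrative:

```python
from langchain_core.prompts import PromptTemplate

# Build a prompt template and pipe it into the local model (LCEL syntax).
prompt = PromptTemplate.from_template("Explain {topic} in one paragraph.")
chain = prompt | llm

print(chain.invoke({"topic": "vector databases"}))
```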

## Building RAG Applications

Create retrieval-augmented generation systems:

```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Embed documents locally with a small sentence-transformers model.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Index the texts in an in-memory FAISS store (requires faiss-cpu).
vectorstore = FAISS.from_texts(["Document 1", "Document 2"], embeddings)

# Combine the local LLM with the retriever into a question-answering chain.
qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
```
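With the chain assembled, you can query it directly. A minimal sketch, assuming the `llm` and `qa_chain` from above; the question string is illustrative:

```python
# RetrievalQA takes a dict with a "query" key and returns the answer under "result".
result = qa_chain.invoke({"query": "What do the documents say?"})
print(result["result"])
```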

## Best Practices

  • Use streaming for real-time responses
  • Implement caching to avoid regenerating repeated prompts (sketched below)
  • Monitor token usage and costs
  • Handle errors gracefully with retry logic (sketched below)
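A minimal sketch of the caching and retry points, using LangChain's built-in in-memory cache; `invoke_with_retry` is a hypothetical helper written for illustration, not a LangChain API:

```python
from langchain.globals import set_llm_cache
from langchain_core.caches import InMemoryCache

# Cache completions in memory so repeated identical prompts skip regeneration.
set_llm_cache(InMemoryCache())

def invoke_with_retry(llm, prompt, max_attempts=3):
    # Hypothetical helper: retry transient failures, re-raise on the last attempt.
    for attempt in range(1, max_attempts + 1):
        try:
            return llm.invoke(prompt)
        except Exception:
            if attempt == max_attempts:
                raise

answer = invoke_with_retry(llm, "What is artificial intelligence?")
```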