
Python LangChain Course 🐍🦜🔗 RCI and LangChain Expression Language (6/6)



Welcome back to part 6! In this part, we’re going to take a look at the LangChain Expression Language (LCEL) for chaining together components to create an LLM chain that we can run. You’ve seen this syntax briefly in part 1, but we haven’t gone much into it since then. In this part, you’ll learn how to build a complex LLM chain with multiple layers interwoven with each other. Despite this complexity, it will still be very readable and easy to understand thanks to the LangChain Expression Language.

Before we get started though, as we’re going to be building an RCI chain, let’s talk about what an RCI chain is and why it’s useful. RCI stands for:

Recursive
Criticism and
Improvement

What this basically means is that we’re going to ask the language model, ChatGPT in our case, a question and get an answer. Then we’re going to get a critique of this answer, looking for any problems, mistakes, or areas where the answer can be improved. Then we call ChatGPT again and ask it to improve its answer based on the critique.

So who is going to be doing the critiquing? Well, it turns out that Large Language Models are surprisingly good at critiquing themselves! So we’re going to ask ChatGPT to critique its own answer and then improve its answer based on the critique. That is the basic idea behind RCI.
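Conceptually, an RCI run is just three LLM calls chained together. Here is a minimal sketch of the idea in plain Python (not part of our tutorial file; call_llm is a hypothetical stand-in for a ChatGPT call, and we’ll build the real version with LangChain below):

# Hypothetical helper that sends a prompt to an LLM and returns its answer
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for a real ChatGPT call


def rci(question: str) -> str:
    answer = call_llm(question)
    critique = call_llm(
        f"Question:\n{question}\nAnswer:\n{answer}\n"
        "Review the answer and find out what is wrong with it."
    )
    return call_llm(  # the improved, final answer
        f"Question:\n{question}\nAnswer:\n{answer}\nCritique:\n{critique}\n"
        "Based on this information, give only the correct answer."
    )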

Why is this useful? As you can read in this paper (https://arxiv.org/pdf/2303.17491.pdf), which I believe is the origin of the term RCI itself, it is very effective at solving computer tasks and more complicated reasoning problems. The future of AI is clearly going to involve more and more AI doing things for us, especially computer tasks. So these researchers seem to be on to something with this RCI idea, and I’ve actually already seen it used in real-life applications as well, but more on that later.

Let’s build stuff!

So let’s get started with a practical example before we get too caught up in theory. Create a new folder called ‘6_RCI_and_langchain_expression_language‘ and inside it create a new file named ‘1_RCI_chain.py‘, like this:

📁Finx_LangChain
    📁1_Summarizing_long_texts
    📁2_Chat_with_large_documents
    📁3_Agents_and_tools
    📁4_Custom_tools
    📁5_Understanding_agents
    📁6_RCI_and_langchain_expression_language
        📄1_RCI_chain.py
    📄.env

The file structure for this part of the tutorial will be quite simple, to give you a break after that last one 😉!

Inside our ‘1_RCI_chain.py‘ file we’ll start by importing what we’ll need, as usual:

from dataclasses import dataclass
from typing import Optional

import langchain
from decouple import config
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain.schema.output_parser import StrOutputParser

We’re going to use Python’s built-in dataclasses module and the Optional type to define a simple data structure. We import langchain so we can discuss the debug feature later, config as always, and ChatOpenAI. We also import some prompt template convenience classes which will make it easier to create our prompts from the templates, and a StrOutputParser which will basically just extract the plain string answer for us from the message object ChatGPT sends back.
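To make that last point concrete: the chat model actually returns a message object, and StrOutputParser simply pulls the plain text out of it. A tiny illustration (not part of our file; this mirrors how recent LangChain versions behave):

from langchain.schema import AIMessage
from langchain.schema.output_parser import StrOutputParser

# The parser turns a chat message object into its plain string content
print(StrOutputParser().invoke(AIMessage(content="Hello!")))  # -> Hello!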

Now let’s set up our ChatGPT API:

chatgpt_api = ChatOpenAI(
    model="gpt-3.5-turbo", temperature=0, openai_api_key=config("OPENAI_API_KEY")
)

RCI_log dataclass

Before we dive into the chain, let’s use a simple dataclass to define our own simple data structure. We will use it to represent and store all the stages of an RCI call: the question, initial answer, critique, and final answer. We’ll call this data class an RCI_log and define it below:

@dataclass
class RCI_log:
    question: str
    initial_answer: Optional[str] = None
    constructive_criticism: Optional[str] = None
    final_answer: Optional[str] = None

    def dict(self):
        return self.__dict__.copy()

We use Python’s @dataclass decorator to define a dataclass, which is basically just a class that is used to store data. We can define the fields of the dataclass in the class definition, and then we can create instances of this class and store data in them. We name our dataclass RCI_log and define the fields question, initial_answer, constructive_criticism, and final_answer.

The question is of type string and required, while the other three fields use the Optional type with [str] in the brackets. This means that these three fields should either be a string or None, which makes them optional. We set the default value for these three fields to None, as this allows us to create an RCI_log object passing in just the question, and then fill out the other fields later.

We can also define methods in the class, which we do here with the ‘dict‘ method. This method just returns a copy of the .__dict__ attribute. The .__dict__ attribute is a special attribute in Python that returns a dictionary containing the attributes of an object and their values. It’s used to access the internal dictionary that holds the instance attributes of an object and basically exposes what’s actually stored in memory. We can use this to get a dictionary representation of our RCI_log object, which will be useful later.

One advantage of this dataclass over just using a normal dictionary is that your type-checker (if you have one turned on) will complain if we accidentally mistype a property or try to set one that doesn’t exist, and we also get IntelliSense autocompletion because we have predefined the properties. This dataclass is a bit overkill for just this small tutorial, but if you’re working on large projects passing loads of types of data around, it can really help keep things organized.
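Here’s a quick illustration of how this dataclass behaves (just for demonstration, don’t put this in your file):

log = RCI_log(question="What is 2 + 2?")
print(log.dict())
# {'question': 'What is 2 + 2?', 'initial_answer': None,
#  'constructive_criticism': None, 'final_answer': None}

log.initial_answer = "4"  # fill in the remaining fields as we go
# log.initial_anser = "4"  -> a type-checker flags this typo; a dict would not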

Creating our RCI chain

Anyway, that was just a little detour, let’s get cracking on our RCI LLM chain! Let’s declare a function that will run our RCI chain, below and outside of the dataclass indentation:

def run_rci_chain(question: str) -> RCI_log:
    log: RCI_log = RCI_log(question)

Our function named run_rci_chain takes a question as a string and returns an RCI_log, which is the type we just defined above using our dataclass. We create an instance of this dataclass and store it in a variable named log, passing in the question as the first argument.

Now define an inner function inside this function:

def run_rci_chain(question: str) -> RCI_log:
    log: RCI_log = RCI_log(question)

    def combine_system_plus_human_chat_prompt(
        sys_template: str, human_template: str
    ) -> ChatPromptTemplate:
        return ChatPromptTemplate.from_messages(
            [
                SystemMessagePromptTemplate.from_template(sys_template),
                HumanMessagePromptTemplate.from_template(human_template),
            ]
        )

We define a function called combine_system_plus_human_chat_prompt which takes a system template and a human template as strings and returns a ChatPromptTemplate object. What is a ChatPromptTemplate object? Go ahead and hover over the object name in your IDE and you’ll see that it’s basically just a list of tuples holding a message history.

This function then returns a ChatPromptTemplate created with the .from_messages method, which takes a list of messages as an argument. We create this list of messages by using the .from_template method for both a system message and a human message, passing in our system and human templates (we haven’t created these yet; they are whatever was passed into the function).

This will basically just return an object like this, with whatever system and human prompt templates we passed in:

[
    ("system", "You are a helpful AI bot.... blabla setup instructions"),
    ("human", "Whatever we want the LLM to do for us in this ChatGPT call."),
]
# Don't put this in your file #

That’s all a ChatPromptTemplate object is: a combination of multiple prompt templates into a chat-history-style list of tuples, with a role like “system” or “human” assigned to each message. So now we will need to create a number of these ChatPromptTemplate objects, one for every call we will make to ChatGPT.
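If you’re curious to see the placeholders actually being filled in, you can format one of these templates by hand. A quick illustration (not part of our file, and assuming combine_system_plus_human_chat_prompt were defined at module level so you could reach it):

demo_prompt = combine_system_plus_human_chat_prompt(
    "You are a helpful AI bot.", "{question}"
)
# format_messages substitutes the {placeholders} and returns message objects
print(demo_prompt.format_messages(question="What is an elephant?"))
# [SystemMessage(content='You are a helpful AI bot.'),
#  HumanMessage(content='What is an elephant?')]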

Still inside the run_rci_chain function, but outside the inner function:

def run_rci_chain(question: str) -> RCI_log:
    log: RCI_log = RCI_log(question)

    def combine_system_plus_human_chat_prompt():
        .....

    initial_chat_prompt = combine_system_plus_human_chat_prompt(
        "You are a helpful assistant that provides people with correct and accurate answers.",
        "{question}",
    )

We create our initial_chat_prompt by using the function we just created to combine a system prompt of "You are a helpful assistant that provides people with correct and accurate answers." with a human prompt of whatever the user’s question or input was, by replacing the {question} placeholder. Again, this will just return a list of tuples, with the first tuple holding the ("system", "instructions") role and message and the second tuple holding the ("human", "question") role and message.

After we ask the initial question we will need to get a critique of the answer we got, so let’s set up that prompt as well:

    critique_chat_prompt = combine_system_plus_human_chat_prompt(
        "You are a helpful assistant that looks at a question and its given answer. You will find out what is wrong with the answer and give a critique.",
        "Question:\n{question}\nAnswer Given:\n{initial_answer}\nReview the answer and find out what is wrong with it.",
    )

We run our function again to create the second ChatPromptTemplate object, but this time the system prompt with ChatGPT’s instructions is completely different. We ask it to find out what is wrong with the first answer and give a critique. (If there is nothing wrong with it, it will tell us.) We then feed it the original question and the answer that was given.

Now we just need another ChatPromptTemplate with a system and human message for the third and final call:

    improvement_chat_prompt = combine_system_plus_human_chat_prompt(
        "You are a helpful assistant that will look at a question, its answer and a critique on the answer. Based on this answer and the critique, you will write a new improved answer.",
        "Question:\n{question}\nAnswer Given:\n{initial_answer}\nConstructive Criticism:\n{constructive_criticism}\nBased on this information, give only the correct answer.\nFinal Answer:",
    )

This time we ask it for a new and improved answer, based on the question, the initial answer, and the constructive criticism, all of which we feed into the template using {placeholders}.

LangChain Expression Language

So what is this LangChain Expression Language that we’ll be using? It’s actually very simple, and you’ve already seen it in part 1 of the tutorial.

chain = prompt | model
# Don't put this in your file #

Expression language allows you to compose chains in LangChain by simply using the | pipe operator. So the above simply means that the prompt feeds into the model. It’s kind of like the pipe operator in Bash, where the output of the first item is ‘piped’ into the input of the second, creating a ‘chain’ for Large Language Models, hence the name ‘LangChain’.

So let’s try this out and create a chain to run our initial_chat_prompt through ChatGPT, still continuing inside the run_rci_chain function:

def run_rci_chain(question: str) -> RCI_log:
    .....
    .....
    .....

    initial_chain = initial_chat_prompt | chatgpt_api | StrOutputParser()

So we declare a new LangChain chain named initial_chain, and we use the | pipe operator to pipe the initial_chat_prompt, containing our initial system message and the human message with the user’s question, into the chatgpt_api, and then pipe the output of that into the StrOutputParser. Again, the StrOutputParser will simply return the LLM’s final answer to us as a string.

So if we call this first chain, we will get our first answer, but there is one last thing we need in order to call a chain like this. We’ve actually also seen this in part 1. The initial chat prompt has the {placeholder} values in there, which need to be replaced by our values. We pass these in using a simple dictionary. So we could invoke our initial chain like this:

# Example, don't keep this in your code #
answer = initial_chain.invoke({"question": "What is an elephant?"})

And this would work perfectly. (Note that if you insert this into your file and run it, nothing will happen yet, since the run_rci_chain function is not called anywhere; we’ll do that later.) However, remember that we created the RCI_log data type, which just so happens to contain entries for all the variables our prompt templates will need! (Which is of course no coincidence.) We also gave it a convenient method to output all its data to a dictionary.

So the above can be replaced by:

answer = initial_chain.invoke(log.dict())

This simply passes in the dictionary form of the data inside our RCI_log and then calls the chain. Convenient! We now have:

initial_chain = initial_chat_prompt | chatgpt_api | StrOutputParser()
answer = initial_chain.invoke(log.dict())

This answer will contain our initial answer, which we need to store in our RCI_log object, so let’s change the code:

initial_chain = initial_chat_prompt | chatgpt_api | StrOutputParser()
log.initial_answer = initial_chain.invoke(log.dict())

We invoke the initial chain, passing in our log dictionary, which only has the question in it, and in return we get the initial answer, which we store in our log.initial_answer field. Note how simple and readable this is, thanks to the LangChain expression language. So now let’s add the second step.

critique_chain = critique_chat_prompt | chatgpt_api | StrOutputParser()
log.constructive_criticism = critique_chain.invoke(log.dict())

We create a critique chain, using the critique prompt we already set up, pipe it into ChatGPT, and then into the string output parser. We invoke this chain, passing in our log dictionary, which by now contains both the question and the initial answer, allowing the critique chat prompt’s {placeholders} to be filled in. We then store the output of this chain in our log.constructive_criticism field.

Now for the last one:

improvement_chain = improvement_chat_prompt | chatgpt_api | StrOutputParser()
log.final_answer = improvement_chain.invoke(log.dict())

We do exactly the same again, creating our final chain and passing in our log‘s dictionary, which by now has all three values needed, and then we store the final answer in our log dataclass.

Now let’s add a print statement for some nice, readable output:

print(
    f"""
    Question:
    {log.question}

    Answer Given:
    {log.initial_answer}

    Constructive Criticism:
    {log.constructive_criticism}

    Final Answer:
    {log.final_answer}
    """
)
return log

We print a multi-line string with all the data in our RCI_log object, and finally we also return the log, as we promised when declaring this function that we would return an object of type RCI_log (-> RCI_log), and we should keep our promises!

So here’s the whole run_rci_chain function:

def run_rci_chain(question: str) -> RCI_log:
    log: RCI_log = RCI_log(question)

    def combine_system_plus_human_chat_prompt(
        sys_template: str, human_template: str
    ) -> ChatPromptTemplate:
        return ChatPromptTemplate.from_messages(
            [
                SystemMessagePromptTemplate.from_template(sys_template),
                HumanMessagePromptTemplate.from_template(human_template),
            ]
        )

    initial_chat_prompt = combine_system_plus_human_chat_prompt(
        "You are a helpful assistant that provides people with correct and accurate answers.",
        "{question}",
    )
    critique_chat_prompt = combine_system_plus_human_chat_prompt(
        "You are a helpful assistant that looks at a question and its given answer. You will find out what is wrong with the answer and give a critique.",
        "Question:\n{question}\nAnswer Given:\n{initial_answer}\nReview the answer and find out what is wrong with it.",
    )
    improvement_chat_prompt = combine_system_plus_human_chat_prompt(
        "You are a helpful assistant that will look at a question, its answer and a critique on the answer. Based on this answer and the critique, you will write a new improved answer.",
        "Question:\n{question}\nAnswer Given:\n{initial_answer}\nConstructive Criticism:\n{constructive_criticism}\nBased on this information, give only the correct answer.\nFinal Answer:",
    )

    initial_chain = initial_chat_prompt | chatgpt_api | StrOutputParser()
    log.initial_answer = initial_chain.invoke(log.dict())

    critique_chain = critique_chat_prompt | chatgpt_api | StrOutputParser()
    log.constructive_criticism = critique_chain.invoke(log.dict())

    improvement_chain = improvement_chat_prompt | chatgpt_api | StrOutputParser()
    log.final_answer = improvement_chain.invoke(log.dict())

    print(
        f"""
        Question:
        {log.question}

        Answer Given:
        {log.initial_answer}

        Constructive Criticism:
        {log.constructive_criticism}

        Final Answer:
        {log.final_answer}
        """
    )

    return log

Of course, in a real project, we probably shouldn’t store the templates inside the function itself, but in some kind of data object of their own, but I don’t want to pollute this tutorial with too many distractions. So let’s give our RCI chain a test!

Testing our RCI chain

We’re going to ask a trick question that would be hard even for most of us to answer, as it’s related to a specific niche and particular people. Add the following:

query = "who was the primary man to win 9 consecutive races in method 1?"
print(run_rci_chain(query))

Then run the file, and I get:

Question:
who was the first man to win 9 consecutive races in formula 1?

Answer Given:
The first man to win 9 consecutive races in Formula 1 was Alberto Ascari. He achieved this remarkable feat between 1952 and 1953.

Constructive Criticism:
The answer provided is incorrect. While Alberto Ascari was indeed a successful Formula 1 driver, he did not win 9 consecutive races. The correct answer to the question is Sebastian Vettel. He achieved this impressive feat between 2013 and 2014, winning 9 consecutive races.

Final Answer:
The first man to win 9 consecutive races in Formula 1 was Sebastian Vettel. He achieved this feat between 2013 and 2014.

This is a tricky question. It’s a specific topic and a very specific question, and Alberto Ascari didn’t win 9 in a row, because he technically didn’t compete in a race in between. But, Formula 1 racing trivia aside, the point is that even ChatGPT-3.5-turbo can be fooled. Yet, quite impressively, it’s capable of finding its own mistake and correcting its own wrongs! That is the idea behind RCI.

Practical uses for RCI

So why is this useful? How is this used in real life? Surely just asking trick questions is not the only use, right? It can be used for particularly difficult questions. For instance, the coding helper “GitHub Copilot”, which integrates into VS Code, uses this technique. It will generate something for you if you ask it for a fix or help, and then go over it again in another pass, just like our RCI chain, to catch mistakes it has made and instantly provide an improved version of the answer. It doesn’t always give you a useful or good answer, but nor does it need to; it’s not a perfect tool. But the RCI mechanism in this case significantly improves the chance of the answer being useful or at least pointing the coder in the right direction.

In addition, as the paper linked at the start of this tutorial mentions, RCI is showing promise in executing computer tasks. LLMs tend to find it hard to compile the correct steps in the correct order right away, instead merely compiling a list of some of the steps in some order, which as you well know will not work for computer tasks. You need all the steps and you need them in the correct order. This makes it especially challenging to have an LLM reliably execute computer operations instead of a human operator. But that is where the future is inevitably going, and RCI and similar approaches are looking like a stepping stone in that direction.

LangChain’s debug setting

Before we wrap up this LangChain course and send you off into the wild to build your own LangChain stuff, I want to give you one more tool in your toolbag as a LangChain developer. I’ve deliberately left this out of the course so far, as the output can be quite overwhelming and more confusing than helpful at first, but now that you have a good grasp of LangChain, let’s talk about the debug feature.

When you’re building complex chains and applications, sometimes things will not quite work as you expect and you will want to actually see everything that is happening under the hood. This is where LangChain’s debug setting comes to the rescue. Scroll back up to the top of your file and directly below the imports add the following line:

langchain.debug = True

Now if you run your file again, you will see everything, and I mean everything, in your console! You will see every single call to ChatGPT and the JSON responses that were received, along with the token usage, the input for each chain, every prompt that was generated, and so on. If something goes wrong somewhere but you can’t exactly pinpoint where, this is useful for figuring out on a granular level where you should be looking for the bug in your code.
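One small note: depending on your LangChain version, the same switch may also be exposed as a setter function. In more recent releases (an assumption worth verifying against your installed version) it looks like this:

# Newer LangChain releases expose the debug flag via langchain.globals
from langchain.globals import set_debug

set_debug(True)  # equivalent to langchain.debug = True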

So that’s it for this LangChain tutorial series! I really hope you enjoyed it and learned a lot. As always, it was my pleasure and an honor, and I hope to see you again in the next one!


This tutorial is part of our full course on Python LangChain. You can find the course URL here: 👇

🧑‍💻 Original Course Link: Becoming a Langchain Prompt Engineer with Python – and Build Cool Stuff 🦜🔗

Original article on the Finxter Academy

💡 Note: You can watch the full course video right here on the blog; I’ll embed the video below each of the other parts as well. If you want the step-by-step course with code and a downloadable PDF course certificate to show your employer or freelancing clients, follow this link to learn more.

