CrewAI on VertexAI Reasoning Engine¶
| | |
|---|---|
| Author(s) | Christos Aniftos |
| Reviewer(s) | Sokratis Kartakis |
| Last updated | 2024-11-14 |
This demo uses the default CrewAI project skeleton template and adapts it to use a Gemini model.
CrewAI is an open-source framework designed to make it easier to develop and manage applications that use multiple AI agents working together. Think of it like a team of specialized AI "workers" collaborating to achieve a common goal.
At the time of the demo's creation we used CrewAI version 0.63.6, so some of the changes we mention may be outdated in future versions.
We explicitly pin library versions in order to avoid breaking this demo in the future.
If you want to know more about starting a new CrewAI project from a template, see Starting Your CrewAI Project.
Installing dependencies¶
First we need to install crewai, which ships with a CLI command to start a new project. Additionally, CrewAI uses Poetry to manage dependencies.
Let's install these two packages:
!pip install vertexai
!pip install -q 'crewai[tools]==0.63.6' 'poetry'
Here you can define your CrewAI Project Name.
CREWAI_PROJECT_NAME = "gcp_crewai" # @param {type:"string"}
Now let's create a CrewAI project. The code below first resets the directory in which this notebook runs. On the first run we are already in the notebook's default path, but a cell further down changes into the project directory once it has been created (i.e. cd CREWAI_PROJECT_NAME). As a result, subsequent executions of this notebook need to reset to the default path.
HOME = get_ipython().getoutput('pwd')

if HOME[0].endswith(CREWAI_PROJECT_NAME):
    %cd ..
    HOME = get_ipython().getoutput('pwd')
!crewai create crew {CREWAI_PROJECT_NAME}
Okay, now that we have created our CrewAI project, let's switch into the project directory.
P.S.: You can see the created project folder in the file explorer on the left.
%cd {HOME[0]}/{CREWAI_PROJECT_NAME}
!ls -la
Install project dependencies¶
The following command installs the project dependencies in case they were not installed during project creation:
!poetry install
PROJECT_ID = "YOUR_PROJECT_ID_HERE" # @param {type:"string"}
LOCATION = "us-central1" # @param {type:"string"}
STAGING_BUCKET = "gs://YOUR_STAGING_BUCKET_HERE" # @param {type:"string"}
Create the Bucket if it does not exist:
!set -x && gsutil mb -p $PROJECT_ID -l $LOCATION $STAGING_BUCKET
Authenticate user¶
The method for authenticating your Google Cloud account depends on the environment in which this notebook is being executed. Depending on your Jupyter environment, you may have to authenticate manually.
Refer to the appropriate section below.
1. For Vertex AI Workbench¶
- Do nothing as you are already authenticated.
2. Local JupyterLab instance¶
- Uncomment and run code below:
# !gcloud auth login
3. For Colab (Recommended)¶
- If you are running this notebook on Google Colab, run the following cell to authenticate your environment.
# Colab authentication - This is to authenticate Colab to your account and project.
import sys

if "google.colab" in sys.modules:
    from google.colab import auth

    auth.authenticate_user(project_id=PROJECT_ID)
    print("Authenticated")
Set the model name according to the LiteLLM syntax:
MODEL_NAME = "vertex_ai/gemini-2.0-flash-001" # @param {type:"string"}
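LiteLLM model strings follow a provider/model pattern, where the vertex_ai/ prefix routes requests to Vertex AI. As a quick sanity check (a minimal standalone sketch, not part of any official API), you can split the string to confirm the prefix:

```python
# Minimal sketch: LiteLLM-style model strings have the shape "<provider>/<model>".
MODEL_NAME = "vertex_ai/gemini-2.0-flash-001"

provider, _, model = MODEL_NAME.partition("/")
assert provider == "vertex_ai", "Vertex AI models need the vertex_ai/ prefix"
print(provider, model)  # vertex_ai gemini-2.0-flash-001
```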
Now let's see how we can enable Gemini in a CrewAI project. CrewAI uses LiteLLM under the hood, so we can set a vertex_ai model name for each of our agents. We need to edit the agent config to change the default LLM to Vertex AI Gemini.
Here is an example:
reporting_analyst:
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known
    for your ability to turn complex data into clear and concise reports, making it
    easy for others to understand and act on the information you provide.
  goal: Create detailed reports based on {topic} data analysis and research findings
  llm: vertex_ai/gemini-2.0-flash-001
We could define the LLM by editing the YAML in the editor, but we provide a script that does the same programmatically.
Feel free to inspect the file under CREWAI_PROJECT_NAME/src/CREWAI_PROJECT_NAME/config/agents.yaml before and after executing the cell below.
import yaml

agent_yaml = f"./src/{CREWAI_PROJECT_NAME}/config/agents.yaml"

with open(agent_yaml) as f:
    agent_config = yaml.safe_load(f)

for k, v in agent_config.items():
    # This loop removes additional newline characters at the end of a text value.
    for attribute, value in v.items():
        if value.endswith("\n"):
            v[attribute] = value[:-1]
    # For each agent we add a key called llm with the model name of choice.
    v["llm"] = MODEL_NAME

with open(agent_yaml, "w") as f:
    yaml.dump(agent_config, f)

print(f"file {agent_yaml} successfully updated!")
Running our crew demo¶
By default this demo lets you run research on a topic of your choice using two agents: a Senior Data Researcher, who researches the given topic, and a Reporting Analyst, who prepares a report from the Researcher's findings.
Let's test our crew now that we have applied the changes. We will run it locally using the CLI.
Because the agents make multiple calls to the Vertex AI Gemini API, some executions may run into quota limits. If you get a RESOURCE_EXHAUSTED error, pause and try again after a minute.
!poetry run {CREWAI_PROJECT_NAME}
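If quota errors persist, a retry loop with exponential backoff can help when you trigger the crew from Python rather than the CLI. A generic, self-contained sketch (run_with_retries and the flaky stand-in function are illustrative, not part of CrewAI):

```python
import time

def run_with_retries(fn, max_attempts=3, base_delay=1.0):
    """Retry fn on exception with exponential backoff (sketch)."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Example with a flaky stand-in for the crew run:
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("RESOURCE_EXHAUSTED")
    return "ok"

print(run_with_retries(flaky, base_delay=0.01))  # ok
```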
Prepare the CrewAI interface for Reasoning Engine¶
Now that we know CrewAI works locally, let's prepare for the Reasoning Engine deployment.
To run CrewAI on Reasoning Engine we need a class that defines __init__, set_up and query methods, saved in a file named crew_ai_app.py.
Below we create crew_ai_app.py, which serves as our wrapper for the Reasoning Engine deployment.
Some highlights:¶
- def set_up(self): defines what happens when our application starts. Depending on your implementation, you might want to initialise other libraries, set up logging, etc. In our simple example we only set the project ID as an environment variable to obtain the right permissions to resources.
- CrewProject().crew().kickoff(inputs={"topic": question}): runs the crew for a given topic. The response should be returned as a str.
wrapper_file_content = ("""
from src.{PROJECT_NAME}.crew import {CLASS_NAME}Crew as CrewProject
from typing import Dict, List, Union
import vertexai
import os


class CrewAIApp:

    def __init__(self, project: str, location: str) -> None:
        self.project_id = project
        self.location = location

    def set_up(self) -> None:
        os.environ['GOOGLE_CLOUD_PROJECT'] = self.project_id
        return

    def query(self, question: str) -> Union[str, List[Union[str, Dict]]]:
        res = CrewProject().crew().kickoff(inputs={{"topic": question}})
        return res.__str__()
""").format(PROJECT_NAME=CREWAI_PROJECT_NAME,
            CLASS_NAME=''.join(word.title() for word in CREWAI_PROJECT_NAME.split('_')))

with open("crew_ai_app.py", "w") as f:
    f.write(wrapper_file_content)
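Note the doubled braces in the template above ({{"topic": question}}): str.format treats {{ and }} as literal braces, so the generated file ends up with a normal dict literal. A tiny standalone illustration:

```python
# str.format renders doubled braces as single literal braces, which is why the
# dict literal in the generated wrapper survives the .format() call intact.
template = 'kickoff(inputs={{"topic": question}})'
print(template.format())  # kickoff(inputs={"topic": question})
```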
Test Wrapper locally¶
Now that we have created our wrapper, let's make sure it can run and trigger CrewAI locally.
from crew_ai_app import CrewAIApp
app = CrewAIApp(project=PROJECT_ID, location=LOCATION)
app.set_up()
response_c = app.query("AI")
Time to initialise VertexAI and deploy our crew to reasoning engine¶
import vertexai
from vertexai.preview import reasoning_engines
vertexai.init(project=PROJECT_ID, location=LOCATION, staging_bucket=STAGING_BUCKET)
Let's list the existing Reasoning Engine instances in our project:
reasoning_engine_list = reasoning_engines.ReasoningEngine.list()
print(reasoning_engine_list)
The Reasoning Engine instance needs the libraries required for CrewAI to execute successfully. Since CrewAI uses Poetry, we export the dependencies to a requirements.txt file and process it to create the requirements list for Reasoning Engine.
!poetry export --without-hashes --format=requirements.txt > requirements.txt \
# && pip install -r requirements.txt
with open('./requirements.txt') as f:
requirements = f.read().splitlines()
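Depending on the Poetry version, the exported file can contain blank lines or comment lines, which Reasoning Engine does not need. As an optional defensive step, here is a sketch that filters those out (the inline sample stands in for requirements.txt):

```python
# Sketch: strip blank lines and comments from an exported requirements list.
# The sample text is illustrative; in the notebook you'd read requirements.txt.
sample = """\
# generated by poetry export
crewai[tools]==0.63.6

pyyaml==6.0.1
"""

requirements = [
    line.strip()
    for line in sample.splitlines()
    if line.strip() and not line.strip().startswith("#")
]
print(requirements)  # ['crewai[tools]==0.63.6', 'pyyaml==6.0.1']
```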
It's deployment time!¶
Deployment takes a few minutes. A good time to grab a coffee! ☕
# Create a remote app with Reasoning Engine.
# This may take a few minutes to finish.
from crew_ai_app import CrewAIApp

reasoning_engine = reasoning_engines.ReasoningEngine.create(
    CrewAIApp(project=PROJECT_ID, location=LOCATION),
    display_name="CrewAI Demo App",
    description="A simple CrewAI demo app",
    requirements=requirements,
    extra_packages=['./src', './crew_ai_app.py'],
)
Now the Reasoning Engine is deployed. You can access it in the future using the following resource name:
print(reasoning_engine.resource_name)
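The resource name follows Vertex AI's standard projects/.../locations/.../reasoningEngines/... format. Assuming the vertexai.preview.reasoning_engines API used above, you can use that name in a later session to reconnect without redeploying; the numeric IDs below are placeholders:

```python
# Assumption: standard Vertex AI resource-name format; the numeric IDs are
# placeholders for illustration only.
project_number = "123456789012"
engine_id = "4567890123456789"
resource_name = (
    f"projects/{project_number}/locations/us-central1/"
    f"reasoningEngines/{engine_id}"
)
print(resource_name)

# In a later session (after vertexai.init), you could reconnect with:
# from vertexai.preview import reasoning_engines
# engine = reasoning_engines.ReasoningEngine(resource_name)
```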
Let's test whether our crew on the Reasoning Engine instance can respond. We'll ask for a report on Henry VIII. You can rerun the query with different topics to see how the agents respond.
response = reasoning_engine.query(question="Henry VIII")
print(response)
Cleanup¶
If you wish to delete the Reasoning Engine deployment, simply uncomment and run the following cell:
#reasoning_engine.delete()