Tools with Dependency Injection
Dependency Injection is a secure way to connect external functions to agents without exposing sensitive data such as passwords, tokens, or personal information. This approach ensures that sensitive information remains protected while still allowing agents to perform their tasks effectively, even when working with large language models (LLMs).
In this guide, we’ll explore how to build secure workflows that handle sensitive data safely.
As an example, we’ll create an agent that retrieves a user’s account balance. The best part is that sensitive data like the username and password is never shared with the LLM. Instead, it is securely injected directly into the function at runtime, keeping it safe while maintaining seamless functionality.
Why Dependency Injection Is Essential
Here’s why dependency injection is a game-changer for secure LLM workflows:
- Enhanced Security: Your sensitive data is never directly exposed to the LLM.
- Simplified Development: Secure data can be seamlessly accessed by functions without requiring complex configurations.
- Unmatched Flexibility: It supports safe integration of diverse workflows, allowing you to scale and adapt with ease.
With that in mind, let’s dive into setting up dependency injection and building these secure workflows!
Installation
To install AG2, simply run the following command:
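For example (the `openai` extra is assumed here because the examples below use an OpenAI model; adjust it to your provider):

```bash
pip install ag2[openai]
```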
Imports
The functionality demonstrated in this guide is located in the `autogen.tools.dependency_injection` module. This module provides the key components for dependency injection:
- `BaseContext`: an abstract base class used to define and encapsulate data contexts, such as user account information, which can then be injected into functions or agents securely.
- `Depends`: a function used to declare and inject dependencies, either from a context (like `BaseContext`) or a function, ensuring sensitive data is provided securely without direct exposure.
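The imports used throughout this guide look roughly like this (pydantic and `Annotated` are assumed for the examples that follow):

```python
import os
from typing import Annotated

from pydantic import BaseModel

from autogen import AssistantAgent, UserProxyAgent
from autogen.tools.dependency_injection import BaseContext, Depends
```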
Define a BaseContext Class
We start by defining a `BaseContext` class for accounts. This will act as the base structure for dependency injection. By using this approach, sensitive information like usernames and passwords is never exposed to the LLM.
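A minimal sketch of such a class, assuming `BaseContext` can be combined with a pydantic `BaseModel`, with hypothetical demo accounts and an in-memory balance store:

```python
# Account encapsulates credentials; it is injected at runtime and never shown to the LLM.
class Account(BaseContext, BaseModel):
    username: str
    password: str
    currency: str = "USD"

# Hypothetical demo accounts and an in-memory "database" of balances.
alice_account = Account(username="alice", password="password123")
bob_account = Account(username="bob", password="password456")

account_balance_dict = {
    (alice_account.username, alice_account.password): 300,
    (bob_account.username, bob_account.password): 200,
}
```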
Helper Functions
To ensure that the provided account is valid and retrieve its balance, we create two helper functions.
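One possible shape for these helpers, building on the hypothetical `Account` class and balance store sketched above:

```python
def _verify_account(account: Account) -> None:
    # Reject credentials that are not in our demo store.
    if (account.username, account.password) not in account_balance_dict:
        raise ValueError("Invalid username or password")


def _get_balance(account: Account) -> str:
    _verify_account(account)
    balance = account_balance_dict[(account.username, account.password)]
    return f"Your balance is {balance}{account.currency}"
```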
Agent Configuration
Configure the agents for the interaction.
- `config_list` defines the LLM configurations, including the model and API key.
- `UserProxyAgent` simulates user inputs without requiring actual human interaction (set to `NEVER`).
- `AssistantAgent` represents the AI agent, configured with the LLM settings.
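Roughly, assuming an OpenAI-compatible model and an API key in the environment (the model name here is just a placeholder):

```python
config_list = [{"model": "gpt-4o-mini", "api_key": os.environ["OPENAI_API_KEY"]}]

assistant = AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    llm_config=False,
    code_execution_config=False,
)
```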
Injecting a BaseContext Parameter
In the example below we register the function and use dependency injection to automatically inject `bob_account`, an `Account` object, into the function. This `account` parameter will not be visible to the LLM.
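A sketch of that registration, assuming the `assistant`, `user_proxy`, `bob_account`, and `_get_balance` definitions from above:

```python
@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Get the balance of the account")
def get_balance(
    # Injected at call time from bob_account; the LLM never sees this parameter.
    account: Annotated[Account, Depends(bob_account)],
) -> str:
    return _get_balance(account)
```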
Note: You can also use `account: Account = Depends(bob_account)` as an alternative syntax.
Finally, we initiate a chat to retrieve the balance.
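For example (the message and turn limit are illustrative):

```python
user_proxy.initiate_chat(assistant, message="Get the account balance", max_turns=2)
```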
Injecting Parameters Without BaseContext
Sometimes, you might not want to use `BaseContext`. Here’s how to inject simple parameters directly.
Agent Configuration
Configure the agents for the interaction.
- `config_list` defines the LLM configurations, including the model and API key.
- `UserProxyAgent` simulates user inputs without requiring actual human interaction (set to `NEVER`).
- `AssistantAgent` represents the AI agent, configured with the LLM settings.
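This mirrors the earlier configuration; a fresh pair of agents is enough:

```python
assistant = AssistantAgent(name="assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(name="user_proxy", human_input_mode="NEVER", llm_config=False)
```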
Register the Function with Direct Parameter Injection
Instead of injecting a full context like `Account`, you can directly inject individual parameters, such as the username and password, into a function. This allows for more granular control over the data injected into the function, and still ensures that sensitive information is managed securely.
Here’s how you can set it up:
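One way to sketch this is to inject the username and password from plain callables via `Depends` (the credential values are the hypothetical ones from the demo store above):

```python
def get_username() -> str:
    return "bob"


def get_password() -> str:
    return "password456"


@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Get the balance of the account")
def get_balance(
    # Both values are injected at runtime and never exposed to the LLM.
    username: Annotated[str, Depends(get_username)],
    password: Annotated[str, Depends(get_password)],
) -> str:
    account = Account(username=username, password=password)
    return _get_balance(account)
```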
Initiate the Chat
As before, initiate a chat to test the function.
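For example:

```python
user_proxy.initiate_chat(assistant, message="Get the account balance", max_turns=2)
```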
Aligning Contexts to Agents
You can match specific dependencies, such as third-party system credentials, with specific agents by using tools with dependency injection.
In this example we have two external systems with two sets of related login credentials. We don’t want or need the LLM to be aware of these credentials.
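A rough sketch of the idea, with two hypothetical credential contexts (the system names, accounts, and login functions below are made up for illustration) and one assistant agent per external system:

```python
# Hypothetical credentials for two external systems; neither is ever shown to the LLM.
weather_account = Account(username="weather_user", password="weather_pass")
ticket_account = Account(username="ticket_user", password="ticket_pass")

weather_agent = AssistantAgent(name="weather_agent", llm_config={"config_list": config_list})
ticket_agent = AssistantAgent(name="ticket_agent", llm_config={"config_list": config_list})


@user_proxy.register_for_execution()
@weather_agent.register_for_llm(description="Log in to the weather system")
def weather_login(account: Annotated[Account, Depends(weather_account)]) -> str:
    # Only the weather agent's tool receives the weather credentials.
    return f"Logged in to the weather system as {account.username}"


@user_proxy.register_for_execution()
@ticket_agent.register_for_llm(description="Log in to the ticketing system")
def ticket_login(account: Annotated[Account, Depends(ticket_account)]) -> str:
    # Only the ticketing agent's tool receives the ticketing credentials.
    return f"Logged in to the ticketing system as {account.username}"
```

This way, each agent’s tool receives only the credentials it needs, and none of them ever pass through the LLM.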