Replies: 3 comments 9 replies
-
@HGF1KOR
-
AutoGen provides built-in support for multi-turn conversations by automatically preserving message history: when you run a team, the conversation context is kept internally between runs. If you're using a multi-agent structure and want to share context across different teams or sessions, then yes, you're right, that's where the Memory module becomes important.
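For instance, here is a minimal sketch of attaching a memory store to an agent, assuming the `autogen_agentchat` / `autogen_core` / `autogen_ext` packages from recent AutoGen releases; the agent name, model, and memory content are placeholders:

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_core.memory import ListMemory, MemoryContent, MemoryMimeType
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # A simple in-process memory store; the same instance can be passed to
    # agents in different teams or sessions to share context between them.
    shared_memory = ListMemory()
    await shared_memory.add(
        MemoryContent(content="The user prefers concise answers.", mime_type=MemoryMimeType.TEXT)
    )

    # The memory's contents are injected into the agent's model context on each run.
    assistant = AssistantAgent(
        name="assistant",
        model_client=OpenAIChatCompletionClient(model="gpt-4o-mini"),
        memory=[shared_memory],
    )
    print(await assistant.run(task="Explain superposition."))


asyncio.run(main())
```

Because the same `ListMemory` instance can be handed to agents in different teams, this is one way to carry shared context across team or session boundaries.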
Alternatively, if you want to enable multi-team coordination without using Memory, there are other options; one possibility is sketched below.
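As a rough, non-authoritative sketch of doing this by hand (the names `team_a` and `team_b` are placeholders for two already-constructed AgentChat teams, and the code assumes you are in an async context such as a notebook cell):

```python
# Run the first team and collect its conversation from the TaskResult.
result = await team_a.run(task="Research topic X")

# Fold the messages into plain text; some message types carry non-string
# content (e.g. tool calls), so only keep string content here.
history_lines = []
for m in result.messages:
    content = getattr(m, "content", None)
    if isinstance(content, str):
        history_lines.append(f"{m.source}: {content}")
history = "\n".join(history_lines)

# Hand the condensed history to the second team as part of its task.
await team_b.run(
    task=f"Here is the prior conversation:\n{history}\n\nNow continue with task Y."
)
```

The trade-off versus Memory is that you decide explicitly what gets carried over, which also makes it easy to summarize or truncate the history before handing it to the next team.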
-
Based on other messages in this discussion, I think you need two things: a way to continue the conversation with the previous context, and fine-grained control over what goes into that context. Here are a couple of approaches that might help:

**1. Simple Follow-up with Context**

In my experience, the team preserves its context internally between runs, so you can just provide the follow-up user message as the new task and the agents continue the conversation just fine:

```python
from autogen_agentchat.ui import Console  # consumes the run_stream() async generator

# First task.
await Console(team.run_stream(task="first task"))
# Output: a TaskResult with some context.

# Follow-up: just provide the follow-up user message as the new task.
await Console(team.run_stream(task="follow up"))
```

You also mentioned in one of your earlier messages that you're using …

**2. Manual Manipulation of Messages (for token savings and fine-grained control)**

This is more useful when you want tighter control over the context: pruning unnecessary messages, summarizing older content, or just being more token-efficient. As @SongChiYoung suggested earlier, this points to using the …. In the meantime, there's a workaround using the team's `save_state()` / `load_state()` APIs:

```python
from autogen_agentchat.ui import Console  # as above

# Run the first task as usual.
await Console(team.run_stream(task="first task"))
state = await team.save_state()
# The `state` object now contains a detailed snapshot of the chat history:
# {
# 'type': 'TeamState',
# 'version': '1.0.0',
# 'agent_states': {
# 'agent_1': {
# 'type': 'ChatAgentContainerState',
# 'version': '1.0.0',
# 'agent_state': {
# 'type': 'AssistantAgentState',
# 'version': '1.0.0',
# 'llm_context': {
# 'messages': [
# {'type': 'UserMessage', 'source': 'user', 'content': "Let's have a conversation around quantum physics."},
# {'type': 'UserMessage', 'source': 'agent_3', 'content': '...'},
# {'type': 'AssistantMessage', 'source': 'agent_2', 'content': '...'}
# ]
# }
# }
# }
# }
# }
# Now if you're planning to continue the session later, and want to reduce token usage...
# Step 1: Reset the team if you're not disposing of the running process
await team.reset()
# Step 2: Modify the state however you need — trim messages, summarize parts, etc.
# You can also save this to disk if needed:
# with open("team_state.json", "w") as f:
# json.dump(state, f)
# Step 3: Later (after N minutes/hours), when the user sends a follow-up:
# - Re-initialize your agents and team
team = ... # however you originally constructed it
# - Load the modified state back in
await team.load_state(state) # or load from disk
# Step 4: Start the next task using the follow-up
await Console(team.run_stream(task="user follow-up"))
```

This gives you full control over what the agents see in their context window, and it's super helpful if you're running into token limits or want to be selective about historical messages. Let me know if I misunderstood any part of your question or if there's a specific detail you'd like me to revisit.
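To make step 2 concrete, here is a minimal sketch of pruning the saved history before loading it back. It assumes the state layout shown in the comments above (`agent_states` → `agent_state` → `llm_context` → `messages`); the helper `trim_team_state` and the `keep_last` parameter are hypothetical names, and the exact schema can vary by team/agent type and version:

```python
def trim_team_state(state: dict, keep_last: int = 10) -> dict:
    """Keep only the most recent messages in each agent's LLM context.

    Assumes the TeamState layout shown above; adjust the keys if your
    team/agent types serialize their state differently.
    """
    for container_state in state.get("agent_states", {}).values():
        llm_context = container_state.get("agent_state", {}).get("llm_context")
        if llm_context and "messages" in llm_context:
            llm_context["messages"] = llm_context["messages"][-keep_last:]
    return state


# Usage: trim the saved state, then load the slimmed-down version back in.
state = trim_team_state(state, keep_last=6)
await team.load_state(state)
```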
-
I have a question: What are the possible ways to enable multi-turn conversations between a user and a multi-agent system, such that the system is aware of the previous questions asked?