Optimas + SuperOptiX: Global-Reward Optimization for DSPy, CrewAI, AutoGen, and OpenAI Agents SDK

August 14, 2025
25 min read
By Shashi Jagtap


Optimization has been central to SuperOptiX from day one, whether the target is prompts, weights, parameters, or compute. It began with DSPy-style programmatic prompt engineering and teleprompting, since DSPy was the only framework doing systematic prompt optimization.

For a long time, other frameworks had no comparable way to optimize prompts; now there is a solution. Today, we're bringing Optimas into the SuperOptiX ecosystem so you can apply globally aligned local rewards across multiple frameworks: OpenAI Agents SDK, CrewAI, AutoGen, and DSPy.

You can check out the Optimas and SuperOptiX integration here.

Optimizing a single prompt isn't enough for modern "compound" AI systems. Real systems chain LLMs, tools, and traditional ML into multi-step workflows, and the right unit of optimization is the whole pipeline. Optimas introduces globally aligned local rewards that make per-component improvements reliably lift end-to-end performance.

SuperOptiX now brings Optimas to your existing agent stacks—OpenAI Agent SDK, CrewAI, AutoGen, and DSPy—behind one practical CLI, so you can go from baseline to optimized without changing frameworks.

What Optimas is (and why it matters)

Optimas is a unified optimization framework for compound AI systems. It provides:

Key Capabilities

  • Global Alignment: Learns a local reward function (LRF) per component that remains globally aligned, so local updates are safe and beneficial to the whole system
  • Heterogeneous Configuration: Supports prompts, hyperparameters, discrete choices like top-k, tool/model selection, and routing
  • Cross-Framework Support: Works across OpenAI Agent SDK, CrewAI, AutoGen, and DSPy through target adapters
  • Compound System Optimization: Works across multiple components and tools, not just single prompts
  • Multiple Optimizers: OPRO (single-iteration), MIPRO (multi-iteration), and COPRO (cooperative optimization)
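As a rough intuition for how an OPRO-style optimizer behaves, here is a deliberately tiny sketch (a toy stand-in, not the Optimas implementation): score the current prompt, have a proposer generate variants, keep the best, and repeat. In real OPRO the proposer is an LLM conditioned on the scored history; here it is deterministic so the loop is easy to follow.

```python
# Toy OPRO-style loop; the "proposer" is a deterministic stand-in for the
# LLM that OPRO would use to suggest new prompts from scored history.
def score(prompt):
    # pretend prompts of ~30 characters are ideal for the task
    return -abs(len(prompt) - 30)

def propose(best_prompt, n=3):
    # generate simple variants of the current best prompt
    return [best_prompt + " Think step-by-step." * i for i in range(1, n + 1)]

prompt = "Answer concisely."
for _ in range(2):
    candidates = [prompt] + propose(prompt)
    prompt = max(candidates, key=score)  # keep the best-scoring candidate
```

MIPRO and COPRO follow the same score-propose-select skeleton but differ in how many components they touch per round and how proposals are coordinated.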

What this unlocks

  • Optimize prompts, hyperparameters, model parameters, and model routers across compound AI systems
  • Run OPRO, MIPRO, and COPRO optimization loops using a single CLI workflow
  • Keep your preferred agent stack and get consistent optimization behavior

Why this is impactful

  • Global-Local Alignment: Optimas learns a local reward function (LRF) for each component that stays aligned with a global objective
  • Data Efficiency: Independently maximizing a component's local reward still increases overall system quality
  • Heterogeneous Updates: Supports updates across prompts, hyperparameters, model selection/routing, and model parameters via RL
  • Proven Results: The Optimas paper reports an average relative improvement of 11.92% across five complex compound systems
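To make global-local alignment concrete, here is a minimal sketch (not the Optimas API; the names and the linear scorer are illustrative): a global metric scores the end-to-end output, while each component gets a cheap local scorer. In Optimas, the local scorer's weights are trained so that preferring higher-scoring intermediate outputs tends to raise the global metric; here the weights are fixed for illustration.

```python
# Toy sketch of a per-component "local reward" kept aligned with a
# global metric. All names are illustrative, not the Optimas API.

def global_metric(pipeline_output, reference):
    # e.g. exact-match quality of the end-to-end answer
    return 1.0 if pipeline_output.strip() == reference.strip() else 0.0

def local_reward(component_output, weights):
    # A cheap scorer for one component's intermediate output. In Optimas
    # this is trained against end-to-end feedback; here it is a fixed
    # linear proxy over two simple features.
    features = [len(component_output), component_output.count("\n")]
    return sum(w * f for w, f in zip(weights, features))

# Pick the candidate intermediate output with the highest local reward:
candidates = ["short", "a longer, more detailed draft\nwith structure"]
weights = [0.01, 0.5]
best = max(candidates, key=lambda c: local_reward(c, weights))
```

The point of the alignment guarantee is that this local selection step is safe: improving a component's local reward does not silently degrade the downstream global metric.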

Where Optimas fits in SuperOptiX

SuperOptiX, as the name suggests, is built around optimization. Optimas plugs directly into its lifecycle:

  1. Compile your agent into a runnable pipeline for a specific target
  2. Evaluate to get a baseline
  3. Optimize with Optimas (OPRO/MIPRO/COPRO) using the same CLI across targets
  4. Run the optimized agent

This extends optimization beyond prompts to hyperparameters, model selection, routing, and parameters where supported.

  • Focus-aligned: SuperOptiX is built around optimization; Optimas operationalizes optimization across agents and tools
  • Beyond prompts: SuperOptiX + Optimas can optimize prompts, hyperparameters, model parameters, and even model routers
  • One CLI to rule them all: Use a consistent sequence—compile, evaluate, optimize, run—across all targets
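If you want to script the lifecycle end to end, the four stages chain naturally. The helper below is an illustrative wrapper around the `super` CLI (flag order and demo names taken from the examples in this post), not part of SuperOptiX itself:

```python
# Illustrative driver for the compile → evaluate → optimize → run sequence.
# stage_cmd only builds the argument list; run_pipeline shells out to the
# `super` CLI and stops at the first failing stage.
import subprocess

def stage_cmd(verb, agent, target, *extra):
    return ["super", "agent", verb, agent, "--target", target, *extra]

def run_pipeline(agent="optimas_openai", target="optimas-openai"):
    stages = [
        stage_cmd("compile", agent, target),
        stage_cmd("evaluate", agent, target, "--engine", "optimas"),
        stage_cmd("optimize", agent, target,
                  "--engine", "optimas", "--optimizer", "opro"),
        stage_cmd("run", agent, target, "--engine", "optimas",
                  "--goal", "Write a Python function to add two numbers"),
    ]
    for cmd in stages:
        subprocess.run(cmd, check=True)  # raises on a non-zero exit code
```

Because every target follows the same four verbs, swapping `optimas_openai`/`optimas-openai` for the CrewAI, AutoGen, or DSPy equivalents is the only change needed.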

Installation

Install the core Optimas integration and target-specific extras as needed:

Bash
# Core Optimas integration
pip install "superoptix[optimas]"

# Target-specific extras (choose as needed)
pip install "superoptix[optimas,optimas-openai]"
pip install "superoptix[optimas,optimas-crewai]"
pip install "superoptix[optimas,optimas-autogen]"
pip install "superoptix[optimas,optimas-dspy]"

# CrewAI note: resolve dependency pin by installing CrewAI without deps, then json-repair
pip install crewai==0.157.0 --no-deps
pip install "json-repair>=0.30.0"

Note: CrewAI pins json-repair to 0.26.0, while DSPy 3.0.0 requires json-repair >= 0.30.0. Installing CrewAI with --no-deps and then installing json-repair >= 0.30.0, as shown above, resolves the conflict.
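The conflict is easy to see by hand: no version of json-repair satisfies both specifiers at once, so a standard resolver must fail. A minimal check, with plain tuple comparison standing in for a real resolver:

```python
# Compare dotted versions as integer tuples; enough for this illustration.
def parse(v):
    return tuple(int(part) for part in v.split("."))

def satisfies_both(v):
    # CrewAI pins json-repair==0.26.0; DSPy 3.0.0 needs >=0.30.0
    return parse(v) == parse("0.26.0") and parse(v) >= parse("0.30.0")

candidates = ("0.26.0", "0.30.0", "0.35.3")
conflict_free = [v for v in candidates if satisfies_both(v)]
# conflict_free is empty, which is why `--no-deps` is needed
```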

Quick Start Across All Targets

Set up your project and pull demo playbooks for each target:

Bash
# Initialize a new project
super init test_optimas
cd test_optimas

# Pull demo playbooks
super agent pull optimas_openai      # OpenAI SDK (recommended)
super agent pull optimas_crewai      # CrewAI
super agent pull optimas_autogen     # AutoGen
super agent pull optimas_dspy        # DSPy

OpenAI Agent SDK (Recommended for Production)

This is the fastest, most stable path to results with minimal dependency friction:

Bash
# Compile → Evaluate
super agent compile optimas_openai --target optimas-openai
super agent evaluate optimas_openai --engine optimas --target optimas-openai

# Optimize (OPRO shown; adjust search breadth, temperature, and timeout inline)
SUPEROPTIX_OPRO_NUM_CANDIDATES=3 \
SUPEROPTIX_OPRO_MAX_WORKERS=3 \
SUPEROPTIX_OPRO_TEMPERATURE=0.8 \
SUPEROPTIX_OPRO_COMPILE_TIMEOUT=120 \
super agent optimize optimas_openai --engine optimas --target optimas-openai --optimizer opro

# Run
super agent run optimas_openai --engine optimas --target optimas-openai \
  --goal "Write a Python function to add two numbers"

Why these knobs matter: Candidates broaden the search; temperature encourages variation; compile timeout helps for larger models; workers control concurrency for faster iterations.
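These knobs arrive as environment variables, so you can set them per-invocation without touching config files. A minimal sketch of how such settings are typically read (the fallback defaults here are made up, not SuperOptiX's actual defaults):

```python
import os

def env_int(name, default):
    # read an integer knob from the environment, falling back to a default
    return int(os.environ.get(name, default))

num_candidates = env_int("SUPEROPTIX_OPRO_NUM_CANDIDATES", 3)
max_workers = env_int("SUPEROPTIX_OPRO_MAX_WORKERS", 1)
temperature = float(os.environ.get("SUPEROPTIX_OPRO_TEMPERATURE", "0.7"))
compile_timeout_s = env_int("SUPEROPTIX_OPRO_COMPILE_TIMEOUT", 60)
```

Prefixing the variables on the command line, as in the example above, scopes them to that single `super agent optimize` run.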

CrewAI (Great for Role-Based Multi-Agent Workflows)

If you orchestrate crews of agents, Optimas can optimize prompts and task hyperparameters in the same loop:

Bash
# Compile → Evaluate
super agent compile optimas_crewai --target optimas-crewai
super agent evaluate optimas_crewai --engine optimas --target optimas-crewai

# Optimize (tune LiteLLM behavior; keep workers modest)
LITELLM_TIMEOUT=60 \
LITELLM_MAX_RETRIES=3 \
SUPEROPTIX_OPRO_MAX_WORKERS=3 \
super agent optimize optimas_crewai --engine optimas --target optimas-crewai --optimizer opro

# Run
super agent run optimas_crewai --engine optimas --target optimas-crewai \
  --goal "Write a Python function to calculate factorial"

Tip: Retries and timeouts harden long-running optimization loops against transient provider hiccups. For model client behavior, see LiteLLM.
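What `LITELLM_MAX_RETRIES` buys you is, in effect, a wrapper of the following shape (an illustrative generic helper, not LiteLLM's actual code): failed calls are retried with exponential backoff, and only a persistent failure surfaces as an error.

```python
import time

def with_retries(call, max_retries=3, base_delay=0.1):
    """Retry a flaky call with exponential backoff between attempts."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == max_retries:
                raise  # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)

# A stand-in provider call that fails twice, then succeeds:
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient provider hiccup")
    return "ok"

result = with_retries(flaky_call)
```

Over a long optimization run with many candidate evaluations, absorbing two or three transient failures per loop is the difference between finishing and restarting.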

AutoGen (Strong for Conversational/Multi-Agent; Optimization Can Be Slower)

AutoGen excels at complex, multi-turn agent interactions. Give the optimizer more headroom:

Bash
# Compile → Evaluate
super agent compile optimas_autogen --target optimas-autogen
super agent evaluate optimas_autogen --engine optimas --target optimas-autogen

# Optimize (increase compile timeout for heavier pipelines)
LITELLM_TIMEOUT=60 \
LITELLM_MAX_RETRIES=3 \
SUPEROPTIX_OPRO_MAX_WORKERS=3 \
SUPEROPTIX_OPRO_COMPILE_TIMEOUT=180 \
super agent optimize optimas_autogen --engine optimas --target optimas-autogen --optimizer opro

# Run
super agent run optimas_autogen --engine optimas --target optimas-autogen \
  --goal "Write a Python function to reverse a string"

Why timeouts help: Larger or tool-heavy pipelines can exceed quick compile windows; a higher timeout reduces spurious failures during candidate generation.
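Conceptually, the compile timeout acts like a guard of this shape (illustrative only; `SUPEROPTIX_OPRO_COMPILE_TIMEOUT` is the real knob, and the helper below is not SuperOptiX code): a candidate that exceeds its budget is counted as failed instead of hanging the whole loop.

```python
import subprocess
import sys

def compile_with_timeout(cmd, timeout_s):
    """Run a compile command; return None instead of hanging past timeout_s."""
    try:
        return subprocess.run(cmd, timeout=timeout_s, check=True)
    except subprocess.TimeoutExpired:
        return None  # count this candidate as failed and move on

# A slow stand-in "compile" that exceeds a 0.5-second budget:
slow = [sys.executable, "-c", "import time; time.sleep(5)"]
result = compile_with_timeout(slow, timeout_s=0.5)  # times out, returns None
```

Raising the budget (180 seconds above, versus 120 for the OpenAI target) trades wall-clock time for fewer spurious candidate failures on heavier AutoGen pipelines.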

DSPy (Fully Supported)

DSPy is a natural fit if your system is authored in DSPy. Start with OPRO for reliability; try MIPRO for deeper prompt improvements:

Bash
# Compile → Evaluate
super agent compile optimas_dspy --target optimas-dspy
super agent evaluate optimas_dspy --engine optimas --target optimas-dspy

# Optimize (start with OPRO; adjust temperature/workers)
SUPEROPTIX_OPRO_MAX_WORKERS=3 \
SUPEROPTIX_OPRO_TEMPERATURE=0.8 \
super agent optimize optimas_dspy --engine optimas --target optimas-dspy --optimizer opro

# Run
super agent run optimas_dspy --engine optimas --target optimas-dspy \
  --goal "Write a Python function to calculate fibonacci numbers"

Note: If you see any concurrency/threading issues in your model client stack, set SUPEROPTIX_OPRO_MAX_WORKERS=1 during optimization to serialize candidate evaluations.
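What `SUPEROPTIX_OPRO_MAX_WORKERS=1` does, conceptually: candidate evaluations go through a worker pool, and a single worker serializes them, which sidesteps thread-safety issues in some model client stacks. A minimal sketch (the scoring function is a stand-in for a real model-backed evaluation):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_candidate(prompt):
    return len(prompt)  # stand-in for a real model-backed scoring call

def evaluate_all(prompts, max_workers=1):
    # max_workers=1 serializes evaluations; >1 runs them concurrently
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(evaluate_candidate, prompts))

scores = evaluate_all(["a", "bb", "ccc"], max_workers=1)
```

Serializing costs throughput but removes concurrency from the equation, which makes it the right first move when debugging intermittent client errors.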

Watch the Demo

Conclusion

SuperOptiX is not limited to prompt optimization. Optimas unlocks the ability to go beyond prompts and optimize agents built on existing frameworks. The integration is still early, and we will evolve it together with Optimas as framework support matures.

The integration of global-reward optimization across multiple AI agent frameworks represents a significant advancement in AI agent optimization and orchestration. This comprehensive optimization stack provides the foundation for building reliable, scalable, and optimizable AI agent systems.