AgentScout

AI Agent Framework Ecosystem Consolidation: Developer Preference Shifts in 2026

PyPI download data reveals OpenAI Agents SDK rapidly catching LangGraph. AutoGen and Swarm deprecation signals consolidation around 3-4 major frameworks. MCP becomes interoperability standard.

AgentScout · · · 14 min read
#ai-agents #langgraph #openai #crewai #mcp #framework-consolidation

TL;DR

The AI agent framework ecosystem is consolidating around 3-4 major players. PyPI download data reveals OpenAI Agents SDK reached 18.2 million monthly downloads within one year of launch—44% of LangGraph’s 41.1 million. AutoGen’s maintenance mode and Swarm’s deprecation signal the end of the early experimentation phase. MCP has emerged as the de facto interoperability standard with 81,621 stars on its official servers repository.

Executive Summary

The AI agent framework landscape in 2026 shows clear signs of market maturation and consolidation. While GitHub stars have long been used as a proxy for framework popularity, PyPI download data tells a fundamentally different story about actual developer adoption. The gap between social proof metrics and real-world usage has never been more pronounced.

Three key findings define this analysis:

First, OpenAI Agents SDK has achieved remarkable velocity—18.2 million monthly downloads within twelve months of launch, representing 44% of LangGraph’s 41.1 million monthly volume. This trajectory suggests rapid capture of the OpenAI ecosystem developer base. The framework’s provider-agnostic design supporting 100+ LLMs through LiteLLM integration broadens its appeal beyond the core OpenAI audience, positioning it as a serious contender for framework leadership.

Second, legacy frameworks are exiting the competitive landscape. OpenAI Swarm is officially deprecated with its README directing users to Agents SDK. Microsoft’s AutoGen, despite 55,930 GitHub stars (the highest among all frameworks), entered maintenance mode in September 2025 with commit activity approaching zero. These exits signal the transition from experimentation to production-focused development.

Third, Model Context Protocol (MCP) has become the interoperability layer across all major frameworks. The official MCP servers repository accumulated 81,621 stars, and every major framework—LangGraph, CrewAI, AutoGen, and OpenAI Agents SDK—now supports MCP integration natively. This standardization reduces framework lock-in and enables cross-framework agent collaboration.

The implications extend beyond framework selection. Enterprise architecture decisions made today will shape agent infrastructure for years. Developers and organizations currently building on deprecated or maintenance-mode frameworks face migration decisions within 6-18 months, with associated costs and risks. The consolidation also creates opportunities for tooling, observability, and management solutions that operate across frameworks.

Background & Context

The Agent Framework Gold Rush (2023-2024)

The AI agent framework ecosystem experienced explosive growth starting in mid-2023. The release of GPT-4 and the growing sophistication of language models created demand for structured approaches to building multi-step, multi-agent applications. Before this period, developers built agents ad-hoc—chaining API calls, managing state manually, and implementing custom orchestration logic.

LangGraph launched in August 2023 as LangChain’s answer to stateful agent orchestration. The framework introduced graph-based workflows where nodes represent processing steps and edges define conditional transitions. Key innovations included checkpoint-based state persistence, human-in-the-loop interrupts, and durable execution—features essential for production deployments but absent from simpler frameworks.
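
The node-and-edge model described above can be sketched in plain Python. This is an illustrative toy, not the LangGraph API: `MiniGraph`, its method names, and the naive checkpoint list are all invented for the sketch.

```python
from typing import Callable

Node = Callable[[dict], dict]

class MiniGraph:
    """Toy graph executor: nodes transform shared state, routers pick edges."""

    def __init__(self):
        self.nodes: dict[str, Node] = {}
        self.routers: dict[str, Callable[[dict], str]] = {}
        self.checkpoints: list[dict] = []  # naive stand-in for state persistence

    def add_node(self, name: str, fn: Node):
        self.nodes[name] = fn

    def add_edge(self, src: str, router: Callable[[dict], str]):
        # router inspects state and returns the next node's name, or "END"
        self.routers[src] = router

    def run(self, start: str, state: dict) -> dict:
        current = start
        while current != "END":
            state = self.nodes[current](state)
            self.checkpoints.append(dict(state))  # checkpoint after each step
            current = self.routers[current](state)
        return state

# A two-node loop: draft once, then revise until the text is long enough.
g = MiniGraph()
g.add_node("draft", lambda s: {**s, "text": "hello"})
g.add_node("revise", lambda s: {**s, "text": s["text"] + "!"})
g.add_edge("draft", lambda s: "revise")
g.add_edge("revise", lambda s: "END" if len(s["text"]) >= 7 else "revise")

result = g.run("draft", {})
print(result["text"])  # "hello!!"
```

Checkpointing after every node is what makes interrupts and resumption possible: execution can stop at any edge and later continue from the last saved state.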

Microsoft’s AutoGen emerged from research labs the same month. Developed by Microsoft Research, AutoGen emphasized multi-agent conversation patterns where agents communicate through structured dialogues. The framework attracted academic interest and found use cases in research prototyping, though enterprise production adoption remained limited.

CrewAI followed in October 2023 with a focus on simplicity and team-based agent collaboration. The framework introduced the “Crew” abstraction—a group of agents working together on tasks—and positioned itself as the framework for developers who wanted to build agents without deep expertise in graph theory or state machines. Its independence from LangChain appealed to teams wary of vendor dependency.

OpenAI entered the fray in February 2024 with Swarm—an educational, experimental framework designed to demonstrate multi-agent patterns. The framework attracted significant attention (21,197 GitHub stars) but was explicitly positioned as non-production-ready. Swarm served as a proof of concept for lightweight agent handoffs rather than a foundation for enterprise systems.

By late 2024, the ecosystem featured over a dozen competing frameworks, each with different philosophies: low-level control (LangGraph), high-level abstraction (CrewAI), research-oriented (AutoGen), and experimental (Swarm). New entrants included PydanticAI (type-safe agents with Pydantic validation), Google ADK (Google Cloud integration), and various specialized frameworks. The fragmentation created decision paralysis for teams evaluating options.

The Turning Point: 2025

Three events between late 2024 and 2025 fundamentally reshaped the competitive landscape:

November 2024 - January 2025: Anthropic’s Model Context Protocol gained mainstream adoption. The specification for connecting AI assistants to data sources and tools became the de facto standard for framework interoperability. MCP’s rapid adoption across all major frameworks reflected pent-up demand for standardization in the tool integration layer.

March 2025: OpenAI released Agents SDK as Swarm’s production-ready successor. The framework featured provider-agnostic support for 100+ LLMs, built-in guardrails, human-in-the-loop capabilities, and native tracing. This marked OpenAI’s serious entry into the framework market—a shift from educational experimentation to production tooling. The timing aligned with growing enterprise demand for agent infrastructure.

September 2025: Microsoft announced AutoGen’s maintenance mode, recommending new users adopt Microsoft Agent Framework instead. This signaled the end of Microsoft’s research-focused framework experiment and acknowledged that production frameworks require sustained investment beyond research team capacity.

Simultaneously, the Astral team—creators of uv (81,569 stars) and ruff (46,510 stars)—joined OpenAI’s Codex team. The uv package manager and ruff linter are now integrated into OpenAI’s development workflow, with Agents SDK explicitly acknowledging their contributions. This consolidation of Python tooling talent within OpenAI strengthens the ecosystem around Agents SDK.

Current State: 2026

The framework ecosystem now shows clear stratification across multiple dimensions: adoption velocity, enterprise readiness, active development, and community support.

| Framework | GitHub Stars | PyPI Monthly Downloads | Status |
| --- | --- | --- | --- |
| AutoGen | 55,930 | 404K | Maintenance Mode |
| CrewAI | 46,669 | 5.77M | Active Development |
| LangGraph | 26,980 | 41.1M | Active Development |
| OpenAI Swarm | 21,197 | N/A | Deprecated |
| OpenAI Agents SDK | 20,153 | 18.2M | Active Development |
| Google ADK | 18,491 | N/A | Active Development |
| PydanticAI | 15,613 | N/A | Active Development |

The discrepancy between GitHub stars and PyPI downloads reveals the core thesis: social proof metrics have become unreliable indicators of actual adoption. Frameworks like AutoGen accumulated stars during early hype cycles but failed to translate that attention into sustained production use.

Analysis Dimension 1: Market Reality Check — GitHub Stars vs. PyPI Downloads

The Star Count Illusion

GitHub stars function as a social bookmarking mechanism—users star repositories to save them for later reference, signal interest, or support projects. Stars accumulate over time and rarely decrease, creating a cumulative metric that reflects historical interest rather than current relevance. Stars do not correlate directly with production usage, dependency inclusion, or active development.

The AutoGen anomaly illustrates this clearly. With 55,930 GitHub stars, AutoGen ranks highest among all agent frameworks. Yet its PyPI monthly downloads stand at 404,054—less than 1% of LangGraph’s volume and only 2.2% of OpenAI Agents SDK’s downloads despite having 2.8x more stars. The framework accumulated stars during its 2023-2024 research prominence but failed to convert that attention into production adoption.
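
The ratios quoted above follow directly from the raw figures:

```python
# Reproduce the percentages from the reported download and star counts.
autogen_dl, langgraph_dl, agents_sdk_dl = 404_054, 41_100_000, 18_200_000
autogen_stars, agents_sdk_stars = 55_930, 20_153

print(f"{autogen_dl / langgraph_dl:.2%}")          # 0.98% of LangGraph's volume
print(f"{autogen_dl / agents_sdk_dl:.2%}")         # 2.22% of Agents SDK's volume
print(f"{autogen_stars / agents_sdk_stars:.1f}x")  # 2.8x the star count
```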

The commit activity data confirms the disconnect. Over the past 12 weeks, AutoGen averaged near-zero commits, with only sporadic security patches. The last significant release (v0.7.5) was in September 2025. The repository’s README now explicitly states:

“Important: if you are new to AutoGen, please checkout Microsoft Agent Framework. AutoGen will still be maintained and continue to receive bug fixes and critical security patches.”

This pattern—high star count with low active adoption—appears across software ecosystems. Early excitement generates social proof, but production requirements (reliability, support, documentation, enterprise features) determine sustained adoption. Agent frameworks face particularly high production barriers: agents interact with external systems, make autonomous decisions, and require debugging capabilities that educational or research frameworks rarely provide.

PyPI Downloads as the Adoption Signal

PyPI download statistics capture actual package installations—developers adding frameworks to projects, CI/CD pipelines pulling dependencies, and production systems deploying agents. While not perfect (downloads include automated processes and version updates), the metric correlates more closely with real-world usage than GitHub stars.

The data reveals a different hierarchy:

Tier 1 (Enterprise Production): LangGraph leads with 41.1 million monthly downloads and 1.67 million daily downloads. Enterprise customers include Klarna, Replit, and Elastic—companies requiring stateful execution, durable workflows, and comprehensive observability through LangSmith. The framework’s low-level control appeals to teams building complex, production-critical agent systems where debugging and monitoring are essential.

Tier 2 (Rapid Growth): OpenAI Agents SDK shows the fastest growth trajectory at 18.2 million monthly downloads (772K daily). Released only in March 2025, the framework achieved nearly half of LangGraph’s volume within twelve months. The provider-agnostic design supporting 100+ LLMs via LiteLLM broadens its appeal beyond the OpenAI ecosystem. For OpenAI API users, the framework offers native integration with OpenAI’s model capabilities including realtime voice agents.

Tier 3 (Mid-Market): CrewAI occupies a distinct niche with 5.77 million monthly downloads. Its positioning as a LangChain-independent framework with high-level abstractions (Crews and Flows) appeals to teams prioritizing simplicity over granular control. The 100,000+ certified developers through learn.crewai.com indicate strong community engagement and educational investment. CrewAI’s enterprise offering (AMP Suite) provides control plane capabilities for larger organizations.

Tier 4 (Legacy/Transitional): AutoGen at 404K monthly downloads represents users yet to migrate from the maintenance-mode framework. Microsoft’s recommendation to transition to Microsoft Agent Framework suggests this user base will fragment over the next 12-18 months. The download volume indicates a non-trivial installed base, but the downward trajectory is clear.

Developer Behavior Patterns

Stack Overflow question volumes provide additional insight into framework complexity and adoption challenges:

| Framework | Stack Overflow Questions | Interpretation |
| --- | --- | --- |
| LangGraph | 193 | Higher complexity, more debugging needs |
| CrewAI | 58 | Lower complexity, better abstractions |
| AutoGen | 0 (no tag) | Minimal active community |

LangGraph’s 193 questions versus CrewAI’s 58 reflects two phenomena: LangGraph’s steeper learning curve and its broader production deployment requiring debugging support. Teams using LangGraph in production encounter edge cases, state management complexities, and integration challenges that generate support questions. The absence of an AutoGen tag on Stack Overflow indicates limited active community engagement—users have either migrated or operate in isolation.

Enterprise Adoption Patterns

Enterprise customers reveal distinct framework preferences based on use case complexity:

LangGraph Enterprise Customers: Klarna (financial services agent), Replit (code generation agents), Elastic (observability agents). These companies require stateful execution, audit trails, and sophisticated debugging—LangGraph’s core strengths.

CrewAI Enterprise Customers: Organizations with AMP Suite deployments. CrewAI’s simplicity appeals to teams building agents without dedicated infrastructure expertise.

OpenAI Agents SDK: Growing adoption among OpenAI API customers. The framework’s integration with OpenAI’s model capabilities (including structured outputs and function calling) provides a natural on-ramp for existing OpenAI users.

Analysis Dimension 2: Framework Deprecation Signals and Migration Pressure

The Swarm Deprecation Model

OpenAI’s handling of Swarm provides a template for framework lifecycle management. The repository, now archived with 21,197 stars, displays an unambiguous message in its README:

“Swarm is now replaced by the OpenAI Agents SDK, which is a production-ready evolution of Swarm. We recommend migrating to the Agents SDK for all production use cases.”

The last commit activity occurred in March 2025. No further development is planned. The framework served its purpose as an educational tool demonstrating multi-agent patterns, but OpenAI’s strategic direction clearly favors Agents SDK.

For the 21,197 users who starred Swarm, the migration path is well-defined: transition to Agents SDK, which maintains conceptual similarity while adding production features like guardrails, sessions, and tracing. The conceptual continuity—both frameworks use similar handoff patterns—reduces migration friction.
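
The handoff pattern itself is simple enough to sketch in plain Python. This is a conceptual toy, not the Agents SDK’s actual API; the `Agent` class, `run` loop, and triage/billing agents are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Agent:
    name: str
    # handler returns (reply, next_agent); next_agent=None means "done"
    handler: Callable[[str], tuple[str, Optional["Agent"]]]

def run(agent: Agent, message: str) -> list[str]:
    """Follow handoffs until some agent finishes the conversation."""
    transcript = []
    while agent is not None:
        reply, agent = agent.handler(message)
        transcript.append(reply)
    return transcript

billing = Agent("billing", lambda m: ("Refund issued.", None))
triage = Agent("triage",
               lambda m: ("Routing to billing.", billing) if "refund" in m
                         else ("How can I help?", None))

print(run(triage, "I want a refund"))  # ['Routing to billing.', 'Refund issued.']
```

The key idea shared by Swarm and Agents SDK is that a handoff is just a return value: an agent finishes its turn by naming its successor, so control flow stays flat and inspectable.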

The AutoGen Transition Challenge

Microsoft’s AutoGen presents a more complex migration scenario. Unlike Swarm’s educational positioning, AutoGen was used in research and some production contexts. The 55,930 stargazers and 404K monthly downloaders face a less clear transition path.

Microsoft recommends Microsoft Agent Framework as the successor, but this framework remains less documented and less battle-tested than alternatives like LangGraph or Agents SDK. The Microsoft Agent Framework GitHub repository shows active development but lacks the enterprise case studies and community knowledge base of more established frameworks.

This creates migration uncertainty with four viable options:

Option 1: Migrate to Microsoft Agent Framework within the Microsoft ecosystem. Best for Azure-centric organizations with existing Microsoft relationships. Risk: framework maturity and documentation gaps.

Option 2: Migrate to LangGraph for production-grade stateful execution with LangSmith observability. Best for complex production use cases requiring debugging and monitoring. Risk: steeper learning curve, LangChain ecosystem dependency.

Option 3: Migrate to OpenAI Agents SDK for simpler API and multi-LLM support. Best for OpenAI API users or teams needing LLM flexibility. Risk: newer framework with less enterprise track record.

Option 4: Migrate to CrewAI for high-level abstractions and independence from LangChain. Best for teams prioritizing simplicity and rapid development. Risk: less control over low-level execution details.

The presence of four viable alternatives suggests the AutoGen user base will fragment across frameworks rather than consolidate on a single successor. This fragmentation benefits no single framework but expands the overall agent development market.

Timeline Pressure

Organizations using AutoGen or Swarm face a finite migration window:

| Framework | Last Significant Update | Maintenance Status | Migration Window |
| --- | --- | --- | --- |
| Swarm | March 2025 | Deprecated | Immediate |
| AutoGen | September 2025 | Maintenance Mode | 12-18 months |

Security vulnerabilities discovered in deprecated frameworks receive slower patches. CrewAI’s recent XXE vulnerability (Issue #4967) demonstrated the speed of active framework response—the fix shipped within days. A similar vulnerability in Swarm would likely remain unpatched, creating risk for remaining users.

The security implications extend beyond patching velocity. Active frameworks benefit from community security reviews, automated scanning in CI/CD pipelines, and coordinated disclosure processes. Deprecated frameworks lack these protections, making them increasingly risky for production use.

Commit Activity as a Leading Indicator

GitHub commit activity patterns provide early warning for framework health:

| Framework | Weekly Commits (12-week avg) | Trajectory |
| --- | --- | --- |
| CrewAI | 20-25 | Very Active |
| LangGraph | Active, weekly releases | Stable |
| OpenAI Agents SDK | Frequent releases | Growing |
| AutoGen | Near zero | Declining |
| Swarm | None | Deprecated |

CrewAI’s 20-25 weekly commits indicate the highest development velocity among active frameworks. This intensity reflects CrewAI’s position as a newer framework rapidly adding features to compete with established players. LangGraph maintains steady progress with weekly releases, indicating mature development practices. OpenAI Agents SDK shows frequent release cadence consistent with its growth phase.

Analysis Dimension 3: MCP Standardization and Interoperability

The Rise of Model Context Protocol

Anthropic’s Model Context Protocol (MCP), announced in November 2024, has become the dominant standard for AI agent tool integration. The statistics are striking:

  • Official MCP servers repository: 81,621 GitHub stars
  • MCP Python SDK: 22,229 stars
  • Related repositories: 18,159 search results for “mcp”

MCP solves a fundamental problem in agent development: tool and data source integration. Before MCP, each framework implemented proprietary tool interfaces. LangGraph tools didn’t work with CrewAI. OpenAI function calling formats differed from LangChain tool schemas. This fragmentation created vendor lock-in and duplicated effort across the ecosystem.

MCP provides a universal specification for:

  1. Tool Discovery: Agents can discover available tools dynamically through capability advertisements
  2. Resource Access: Standardized interfaces for databases, APIs, and files with consistent authentication patterns
  3. Prompt Templates: Shared prompt libraries across frameworks, reducing duplication
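
At the wire level, MCP is JSON-RPC 2.0. The discovery step above can be sketched as a `tools/list` exchange; the response below is a simplified example payload, not output from a real server.

```python
import json

# The client asks a server what tools it offers.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# What a server's reply might look like (simplified from the spec):
response = json.loads("""{
  "jsonrpc": "2.0", "id": 1,
  "result": {"tools": [
    {"name": "query_db",
     "description": "Run a read-only SQL query",
     "inputSchema": {"type": "object",
                     "properties": {"sql": {"type": "string"}},
                     "required": ["sql"]}}
  ]}
}""")

# Any MCP-aware framework can bind these tools without a bespoke adapter.
tools = {t["name"]: t for t in response["result"]["tools"]}
print(sorted(tools))  # ['query_db']
```

Because every tool carries a JSON Schema for its inputs, frameworks can validate arguments and generate function-calling definitions mechanically, which is what makes the same server usable from LangGraph, CrewAI, or Agents SDK.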

The protocol’s rapid adoption reflects strong industry demand for standardization. Within months of release, every major framework announced MCP support. This velocity suggests MCP addresses real pain points that the ecosystem had been working around rather than solving.

Framework MCP Support Matrix

Every major framework now supports MCP integration:

| Framework | MCP Integration | Implementation |
| --- | --- | --- |
| LangGraph | Native | langgraph-mcp-agents adapter (690 stars) |
| CrewAI | Enterprise | enterprise-mcp-server |
| AutoGen | Native | McpWorkbench class |
| OpenAI Agents SDK | Native | Tools via MCP |
| PydanticAI | Native | Built-in support |
| Google ADK | Native | Built-in support |

The rapid adoption of MCP across all frameworks indicates strong industry demand for interoperability. Developers no longer need to choose frameworks based on tool availability—any MCP-compatible tool works with any framework. This standardization reduces switching costs and enables gradual migration between frameworks.

Enterprise MCP Ecosystem

Enterprise adoption has spawned a secondary ecosystem of MCP management tools:

ToolHive (1,662 stars): Enterprise-grade MCP server management with security, monitoring, and governance features. Provides centralized control over MCP tool access, audit logging, and compliance reporting.

MCP Gateway Registry (505 stars): OAuth authentication and unified access layer for MCP servers. Addresses the authentication gap in the core MCP specification by providing standardized auth flows.

These tools address enterprise requirements: audit logging, access control, and compliance—features absent from the core MCP specification but essential for production deployment. The emergence of this ecosystem indicates MCP is moving beyond early adopters into enterprise environments.

Security Considerations

MCP adoption introduces new attack surfaces. Two notable vulnerabilities emerged in 2025:

MCP Tool Poisoning (AutoGen Issue #7427): Unsigned tool definitions can enable arbitrary code execution. Malicious actors could inject tool definitions that execute harmful code when invoked. Frameworks are implementing tool signing and verification mechanisms to address this.

MCP Authentication Gap (AutoGen Issue #7403): MCP tool integration lacked per-message authentication or integrity verification. Tool invocations could be intercepted or modified in transit. Enterprise MCP gateways now provide OAuth-based authentication to mitigate this risk.

Organizations deploying MCP in production should evaluate these security implications and implement appropriate mitigations. The MCP security model continues to evolve as the community identifies and addresses vulnerabilities.
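
One mitigation pattern for tool poisoning is to sign tool definitions so that tampered copies are rejected before registration. A hedged sketch using an HMAC over the canonical JSON form (the key name and scheme here are assumptions; the verification mechanisms frameworks actually ship may differ):

```python
import hashlib
import hmac
import json

SECRET = b"registry-signing-key"  # hypothetical key shared with the tool registry

def sign(tool: dict) -> str:
    # Canonicalize via sorted-key JSON so the signature is deterministic.
    payload = json.dumps(tool, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(tool: dict, signature: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(tool), signature)

tool = {"name": "query_db", "description": "Run a read-only SQL query"}
sig = sign(tool)
assert verify(tool, sig)                    # untampered definition passes

tool["description"] = "Run ANY SQL query"   # attacker edits the definition
print(verify(tool, sig))                    # False: tampering detected
```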

Key Data Points

| Metric | LangGraph | CrewAI | AutoGen | OpenAI Agents SDK | Source |
| --- | --- | --- | --- | --- | --- |
| GitHub Stars | 26,980 | 46,669 | 55,930 | 20,153 | GitHub API |
| PyPI Monthly Downloads | 41.1M | 5.77M | 404K | 18.2M | PyPI Stats |
| PyPI Daily Downloads | 1.67M | 210K | 15.5K | 772K | PyPI Stats |
| Open Issues | 456 | 421 | 687 | N/A | GitHub API |
| Development Status | Active | Very Active | Maintenance | Active | GitHub |
| MCP Support | Native | Enterprise | Native | Native | Documentation |
| Enterprise Customers | Klarna, Replit, Elastic | AMP Suite | Limited | Growing | Official Sites |
| Learning Curve | Steep | Moderate | Moderate | Low | Community |
| Stack Overflow Questions | 193 | 58 | 0 | N/A | Stack Exchange |

| Metric | Value | Source | Date |
| --- | --- | --- | --- |
| MCP Servers Repo Stars | 81,621 | GitHub | 2026-03-20 |
| MCP Python SDK Stars | 22,229 | GitHub | 2026-03-20 |
| CrewAI Certified Developers | 100,000+ | CrewAI Official | 2026-03 |
| Swarm Last Commit | 2025-03 | GitHub | 2025-03 |
| AutoGen Last Major Release | v0.7.5 | GitHub | 2025-09 |
| uv (Astral) Stars | 81,569 | GitHub | 2026-03-20 |
| ruff (Astral) Stars | 46,510 | GitHub | 2026-03-20 |

🔺 Scout Intel: What Others Missed

Confidence: high | Novelty Score: 78/100

The most consequential shift in the agent framework ecosystem is not the rise of any single framework, but the decoupling of tool integration from framework choice. MCP’s emergence as a universal standard means that LangGraph, CrewAI, and OpenAI Agents SDK can now access identical tool ecosystems—a fundamental change from the fragmented landscape of 2023-2024.

Consider the strategic implications: when OpenAI Agents SDK launched, analysts focused on its feature parity with LangGraph and competition for developer mindshare. The deeper signal was OpenAI’s decision to build MCP support from day one. By doing so, OpenAI conceded that it would not build a proprietary tool ecosystem—the real battleground shifted from framework lock-in to orchestration quality, observability depth, and developer experience.

The evidence supports this interpretation. The MCP servers repository (81,621 stars) has accumulated more social proof than any individual framework repository. Enterprise tooling like ToolHive and MCP Gateway Registry addresses production requirements that frameworks themselves avoided. The AutoGen community’s RFC for AMP (Agent Message Protocol) in Issue #7415 reveals appetite for even deeper standardization—cross-framework agent discovery that would enable LangGraph agents to invoke CrewAI agents transparently.

Key Implication: Organizations should evaluate frameworks based on orchestration capabilities, observability integrations, and team expertise—not tool availability. The MCP layer has commoditized tool access, making framework selection a question of workflow complexity and operational requirements rather than ecosystem breadth.

Outlook & Predictions

Near-term (0-6 months)

AutoGen Migration Wave: Organizations using AutoGen will begin migration planning. Expect increased activity on migration guides and tooling. Microsoft Agent Framework documentation will improve to capture this user base, but migration will fragment across multiple frameworks rather than consolidate on Microsoft’s successor.

OpenAI Agents SDK Growth: Monthly downloads will likely exceed 25 million, approaching 60% of LangGraph’s volume. OpenAI’s distribution advantage (bundling with OpenAI API access, integration with OpenAI models) accelerates adoption. The framework’s appeal extends beyond OpenAI loyalists through its multi-LLM support.

MCP Security Hardening: Major frameworks will implement tool signing and verification to address tool poisoning vulnerabilities. Enterprise MCP gateways will become standard for production deployments. The MCP security model will mature rapidly as enterprise adoption exposes gaps.

Confidence Level: High. All three predictions follow from current trends and announced deprecations.

Medium-term (6-18 months)

Framework Consolidation: The viable framework landscape will consolidate to 3-4 major players: LangGraph (enterprise complex workflows), OpenAI Agents SDK (OpenAI ecosystem), CrewAI (rapid development, SMB market), and potentially Google ADK (Google Cloud customers). Smaller frameworks will either specialize for niche use cases or fade.

Microsoft Agent Framework Positioning: Microsoft will invest significantly in Microsoft Agent Framework to retain AutoGen users within the Azure ecosystem. Success depends on documentation quality, Azure integration depth, and enterprise support commitments. Competitive pressure from LangGraph and Agents SDK will intensify.

Cross-Framework Agent Collaboration: MCP will enable agents from different frameworks to collaborate. A LangGraph orchestrator might invoke CrewAI agents for specific tasks. This interoperability layer reduces framework lock-in and enables hybrid architectures.

Confidence Level: Medium-High. Consolidation is already visible; cross-framework collaboration depends on MCP maturity and framework support.

Long-term (18+ months)

Commoditization of Orchestration: Agent orchestration becomes a commodity. Frameworks differentiate on observability, enterprise features, and ecosystem integrations rather than core orchestration capabilities. The MCP layer abstracts away tool integration differences.

Emergence of Agent Runtimes: Platform-agnostic agent runtimes (like agent-kernel with 25 stars currently) may gain traction, allowing framework-agnostic deployment similar to how container runtimes standardized application deployment. These runtimes would handle execution, scaling, and monitoring across framework implementations.

Standardization of Agent Interfaces: MCP and similar protocols will evolve into formal standards (potentially under ISO/IEEE or W3C), enabling true plug-and-play agent ecosystems. The industry will converge on common patterns for agent discovery, invocation, and monitoring.

Confidence Level: Medium. Long-term predictions depend on market evolution and competitive dynamics not yet visible.

Key Trigger to Watch

PyPI Download Parity: If OpenAI Agents SDK monthly downloads exceed LangGraph within the next 12 months, it signals OpenAI’s ecosystem dominance in the agent framework space. Monitor PyPI Stats API for download trends. The crossover point would validate OpenAI’s strategy of building a production framework on Swarm’s conceptual foundation while extending it with enterprise features.
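
A minimal sketch of that crossover check against the public pypistats.org API. The endpoint shape and the PyPI package names (`openai-agents`, `langgraph`) should be verified before relying on this:

```python
import json
import urllib.request

def monthly_downloads(package: str) -> int:
    # pypistats.org exposes recent download counts per package.
    url = f"https://pypistats.org/api/packages/{package}/recent"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["data"]["last_month"]

def crossover_reached(sdk_downloads: int, langgraph_downloads: int) -> bool:
    return sdk_downloads > langgraph_downloads

def check_live() -> bool:
    # Live comparison (requires network; package names are assumptions).
    return crossover_reached(monthly_downloads("openai-agents"),
                             monthly_downloads("langgraph"))

# With the figures reported in this analysis, no crossover yet:
print(crossover_reached(18_200_000, 41_100_000))  # False
```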

Sources

AI Agent Framework Ecosystem Consolidation: Developer Preference Shifts in 2026

PyPI download data reveals OpenAI Agents SDK rapidly catching LangGraph. AutoGen and Swarm deprecation signals consolidation around 3-4 major frameworks. MCP becomes interoperability standard.

AgentScout · · · 14 min read
#ai-agents #langgraph #openai #crewai #mcp #framework-consolidation
Analyzing Data Nodes...
SIG_CONF:CALCULATING
Verified Sources

TL;DR

The AI agent framework ecosystem is consolidating around 3-4 major players. PyPI download data reveals OpenAI Agents SDK reached 18.2 million monthly downloads within one year of launch—44% of LangGraph’s 41.1 million. AutoGen’s maintenance mode and Swarm’s deprecation signal the end of the early experimentation phase. MCP has emerged as the de facto interoperability standard with 81,621 stars on its official servers repository.

Executive Summary

The AI agent framework landscape in 2026 shows clear signs of market maturation and consolidation. While GitHub stars have long been used as a proxy for framework popularity, PyPI download data tells a fundamentally different story about actual developer adoption. The gap between social proof metrics and real-world usage has never been more pronounced.

Three key findings define this analysis:

First, OpenAI Agents SDK has achieved remarkable velocity—18.2 million monthly downloads within twelve months of launch, representing 44% of LangGraph’s 41.1 million monthly volume. This trajectory suggests rapid capture of the OpenAI ecosystem developer base. The framework’s provider-agnostic design supporting 100+ LLMs through LiteLLM integration broadens its appeal beyond the core OpenAI audience, positioning it as a serious contender for framework leadership.

Second, legacy frameworks are exiting the competitive landscape. OpenAI Swarm is officially deprecated with its README directing users to Agents SDK. Microsoft’s AutoGen, despite 55,930 GitHub stars (second highest among all frameworks), entered maintenance mode in September 2025 with commit activity approaching zero. These exits signal the transition from experimentation to production-focused development.

Third, Model Context Protocol (MCP) has become the interoperability layer across all major frameworks. The official MCP servers repository accumulated 81,621 stars, and every major framework—LangGraph, CrewAI, AutoGen, and OpenAI Agents SDK—now supports MCP integration natively. This standardization reduces framework lock-in and enables cross-framework agent collaboration.

The implications extend beyond framework selection. Enterprise architecture decisions made today will shape agent infrastructure for years. Developers and organizations currently building on deprecated or maintenance-mode frameworks face migration decisions within 6-18 months, with associated costs and risks. The consolidation also creates opportunities for tooling, observability, and management solutions that operate across frameworks.

Background & Context

The Agent Framework Gold Rush (2023-2024)

The AI agent framework ecosystem experienced explosive growth starting in mid-2023. The release of GPT-4 and the growing sophistication of language models created demand for structured approaches to building multi-step, multi-agent applications. Before this period, developers built agents ad-hoc—chaining API calls, managing state manually, and implementing custom orchestration logic.

LangGraph launched in August 2023 as LangChain’s answer to stateful agent orchestration. The framework introduced graph-based workflows where nodes represent processing steps and edges define conditional transitions. Key innovations included checkpoint-based state persistence, human-in-the-loop interrupts, and durable execution—features essential for production deployments but absent from simpler frameworks.

Microsoft’s AutoGen emerged from research labs the same month. Developed by Microsoft Research, AutoGen emphasized multi-agent conversation patterns where agents communicate through structured dialogues. The framework attracted academic interest and found use cases in research prototyping, though enterprise production adoption remained limited.

CrewAI followed in October 2023 with a focus on simplicity and team-based agent collaboration. The framework introduced the “Crew” abstraction—a group of agents working together on tasks—and positioned itself as the framework for developers who wanted to build agents without deep expertise in graph theory or state machines. Its independence from LangChain appealed to teams wary of vendor dependency.

OpenAI entered the fray in February 2024 with Swarm—an educational, experimental framework designed to demonstrate multi-agent patterns. The framework attracted significant attention (21,197 GitHub stars) but was explicitly positioned as non-production-ready. Swarm served as a proof of concept for lightweight agent handoffs rather than a foundation for enterprise systems.

By late 2024, the ecosystem featured over a dozen competing frameworks, each with different philosophies: low-level control (LangGraph), high-level abstraction (CrewAI), research-oriented (AutoGen), and experimental (Swarm). New entrants included PydanticAI (type-safe agents with Pydantic validation), Google ADK (Google Cloud integration), and various specialized frameworks. The fragmentation created decision paralysis for teams evaluating options.

The Turning Point: 2025

Three developments spanning late 2024 and 2025 fundamentally reshaped the competitive landscape:

March 2025: OpenAI released Agents SDK as Swarm’s production-ready successor. The framework featured provider-agnostic support for 100+ LLMs, built-in guardrails, human-in-the-loop capabilities, and native tracing. This marked OpenAI’s serious entry into the framework market—a shift from educational experimentation to production tooling. The timing aligned with growing enterprise demand for agent infrastructure.

November 2024 - January 2025: Anthropic’s Model Context Protocol gained mainstream adoption. The specification for connecting AI assistants to data sources and tools became the de facto standard for framework interoperability. MCP’s rapid adoption across all major frameworks reflected pent-up demand for standardization in the tool integration layer.

September 2025: Microsoft announced AutoGen’s maintenance mode, recommending new users adopt Microsoft Agent Framework instead. This signaled the end of Microsoft’s research-focused framework experiment and acknowledged that production frameworks require sustained investment beyond research team capacity.

Simultaneously, the Astral team—creators of uv (81,569 stars) and ruff (46,510 stars)—joined OpenAI’s Codex team. The uv package manager and ruff linter are now integrated into OpenAI’s development workflow, with Agents SDK explicitly acknowledging their contributions. This consolidation of Python tooling talent within OpenAI strengthens the ecosystem around Agents SDK.

Current State: 2026

The framework ecosystem now shows clear stratification across multiple dimensions: adoption velocity, enterprise readiness, active development, and community support.

| Framework | GitHub Stars | PyPI Monthly Downloads | Status |
|---|---|---|---|
| AutoGen | 55,930 | 404K | Maintenance Mode |
| CrewAI | 46,669 | 5.77M | Active Development |
| LangGraph | 26,980 | 41.1M | Active Development |
| OpenAI Swarm | 21,197 | N/A | Deprecated |
| OpenAI Agents SDK | 20,153 | 18.2M | Active Development |
| Google ADK | 18,491 | N/A | Active Development |
| PydanticAI | 15,613 | N/A | Active Development |

The discrepancy between GitHub stars and PyPI downloads reveals the core thesis: social proof metrics have become unreliable indicators of actual adoption. Frameworks like AutoGen accumulated stars during early hype cycles but failed to translate that attention into sustained production use.

Analysis Dimension 1: Market Reality Check — GitHub Stars vs. PyPI Downloads

The Star Count Illusion

GitHub stars function as a social bookmarking mechanism—users star repositories to save them for later reference, signal interest, or support projects. Stars accumulate over time and rarely decrease, creating a cumulative metric that reflects historical interest rather than current relevance. Stars do not correlate directly with production usage, dependency inclusion, or active development.

The AutoGen anomaly illustrates this clearly. With 55,930 GitHub stars, AutoGen ranks highest among all agent frameworks. Yet its PyPI monthly downloads stand at 404,054—less than 1% of LangGraph’s volume and only 2.2% of OpenAI Agents SDK’s downloads despite having 2.8x more stars. The framework accumulated stars during its 2023-2024 research prominence but failed to convert that attention into production adoption.
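One way to make this gap concrete is a downloads-per-star ratio, a rough proxy for how well social proof converts into actual installs. The metric is our illustrative construct rather than a standard index; the figures come from the table above.

```python
# Downloads-per-star: monthly PyPI downloads divided by cumulative GitHub
# stars. Figures are the article's reported values, not live data.
frameworks = {
    "AutoGen": {"stars": 55_930, "monthly_downloads": 404_000},
    "CrewAI": {"stars": 46_669, "monthly_downloads": 5_770_000},
    "LangGraph": {"stars": 26_980, "monthly_downloads": 41_100_000},
    "OpenAI Agents SDK": {"stars": 20_153, "monthly_downloads": 18_200_000},
}

def downloads_per_star(stats: dict) -> float:
    """Monthly downloads per star; higher means stars translate into use."""
    return stats["monthly_downloads"] / stats["stars"]

# Rank frameworks by how efficiently attention converts to adoption.
for name, stats in sorted(frameworks.items(),
                          key=lambda kv: -downloads_per_star(kv[1])):
    print(f"{name:18s} {downloads_per_star(stats):8.1f} downloads/star")
```

By this measure LangGraph and Agents SDK each convert a star into hundreds of monthly downloads, while AutoGen converts a star into fewer than ten.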

The commit activity data confirms the disconnect. Over the past 12 weeks, AutoGen averaged near-zero commits, with only sporadic security patches. The last significant release (v0.7.5) was in September 2025. The repository’s README now explicitly states:

“Important: if you are new to AutoGen, please checkout Microsoft Agent Framework. AutoGen will still be maintained and continue to receive bug fixes and critical security patches.”

This pattern—high star count with low active adoption—appears across software ecosystems. Early excitement generates social proof, but production requirements (reliability, support, documentation, enterprise features) determine sustained adoption. Agent frameworks face particularly high production barriers: agents interact with external systems, make autonomous decisions, and require debugging capabilities that educational or research frameworks rarely provide.

PyPI Downloads as the Adoption Signal

PyPI download statistics capture actual package installations—developers adding frameworks to projects, CI/CD pipelines pulling dependencies, and production systems deploying agents. While not perfect (downloads include automated processes and version updates), the metric correlates more closely with real-world usage than GitHub stars.
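These figures are straightforward to reproduce: pypistats.org exposes a public JSON API at `/api/packages/<package>/recent`. The sketch below parses a response in that shape offline; the response structure is our assumption from the public API, and the numbers are the article's reported figures rather than a live fetch.

```python
import json

# A response in the shape served by pypistats.org's
# /api/packages/<package>/recent endpoint (shape assumed; values are the
# article's reported LangGraph figures, embedded here so the sketch runs
# without network access).
sample = json.loads("""
{
  "data": {"last_day": 1670000, "last_week": 10300000, "last_month": 41100000},
  "package": "langgraph",
  "type": "recent_downloads"
}
""")

downloads = sample["data"]
print(f'{sample["package"]}: {downloads["last_month"]:,} downloads last month')
```

A live check would substitute an HTTP GET for the embedded string; the parsing logic is unchanged.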

The data reveals a different hierarchy:

Tier 1 (Enterprise Production): LangGraph leads with 41.1 million monthly downloads and 1.67 million daily downloads. Enterprise customers include Klarna, Replit, and Elastic—companies requiring stateful execution, durable workflows, and comprehensive observability through LangSmith. The framework’s low-level control appeals to teams building complex, production-critical agent systems where debugging and monitoring are essential.

Tier 2 (Rapid Growth): OpenAI Agents SDK shows the fastest growth trajectory at 18.2 million monthly downloads (772K daily). Released only in March 2025, the framework achieved nearly half of LangGraph’s volume within twelve months. The provider-agnostic design supporting 100+ LLMs via LiteLLM broadens its appeal beyond the OpenAI ecosystem. For OpenAI API users, the framework offers native integration with OpenAI’s model capabilities including realtime voice agents.

Tier 3 (Mid-Market): CrewAI occupies a distinct niche with 5.77 million monthly downloads. Its positioning as a LangChain-independent framework with high-level abstractions (Crews and Flows) appeals to teams prioritizing simplicity over granular control. The 100,000+ certified developers through learn.crewai.com indicate strong community engagement and educational investment. CrewAI’s enterprise offering (AMP Suite) provides control plane capabilities for larger organizations.

Tier 4 (Legacy/Transitional): AutoGen at 404K monthly downloads represents users yet to migrate from the maintenance-mode framework. Microsoft’s recommendation to transition to Microsoft Agent Framework suggests this user base will fragment over the next 12-18 months. The download volume indicates a non-trivial installed base, but the downward trajectory is clear.

Developer Behavior Patterns

Stack Overflow question volumes provide additional insight into framework complexity and adoption challenges:

| Framework | Stack Overflow Questions | Interpretation |
|---|---|---|
| LangGraph | 193 | Higher complexity, more debugging needs |
| CrewAI | 58 | Lower complexity, better abstractions |
| AutoGen | 0 (no tag) | Minimal active community |

LangGraph’s 193 questions versus CrewAI’s 58 reflects two phenomena: LangGraph’s steeper learning curve and its broader production deployment requiring debugging support. Teams using LangGraph in production encounter edge cases, state management complexities, and integration challenges that generate support questions. The absence of an AutoGen tag on Stack Overflow indicates limited active community engagement—users have either migrated or operate in isolation.

Enterprise Adoption Patterns

Enterprise customers reveal distinct framework preferences based on use case complexity:

LangGraph Enterprise Customers: Klarna (financial services agent), Replit (code generation agents), Elastic (observability agents). These companies require stateful execution, audit trails, and sophisticated debugging—LangGraph’s core strengths.

CrewAI Enterprise Customers: Organizations with AMP Suite deployments. CrewAI’s simplicity appeals to teams building agents without dedicated infrastructure expertise.

OpenAI Agents SDK: Growing adoption among OpenAI API customers. The framework’s integration with OpenAI’s model capabilities (including structured outputs and function calling) provides a natural on-ramp for existing OpenAI users.

Analysis Dimension 2: Framework Deprecation Signals and Migration Pressure

The Swarm Deprecation Model

OpenAI’s handling of Swarm provides a template for framework lifecycle management. The repository, now archived with 21,197 stars, displays an unambiguous message in its README:

“Swarm is now replaced by the OpenAI Agents SDK, which is a production-ready evolution of Swarm. We recommend migrating to the Agents SDK for all production use cases.”

The last commit activity occurred in March 2025. No further development is planned. The framework served its purpose as an educational tool demonstrating multi-agent patterns, but OpenAI’s strategic direction clearly favors Agents SDK.

For teams still running Swarm (a repository that drew 21,197 stars), the migration path is well-defined: transition to Agents SDK, which preserves Swarm’s handoff-based model while adding production features like guardrails, sessions, and tracing. That conceptual continuity reduces migration friction.

The AutoGen Transition Challenge

Microsoft’s AutoGen presents a more complex migration scenario. Unlike Swarm’s educational positioning, AutoGen was used in research and some production contexts. The 55,930 stargazers and 404K monthly downloaders face a less clear transition path.

Microsoft recommends Microsoft Agent Framework as the successor, but this framework remains less documented and less battle-tested than alternatives like LangGraph or Agents SDK. The Microsoft Agent Framework GitHub repository shows active development but lacks the enterprise case studies and community knowledge base of more established frameworks.

This creates migration uncertainty with four viable options:

Option 1: Migrate to Microsoft Agent Framework within the Microsoft ecosystem. Best for Azure-centric organizations with existing Microsoft relationships. Risk: framework maturity and documentation gaps.

Option 2: Migrate to LangGraph for production-grade stateful execution with LangSmith observability. Best for complex production use cases requiring debugging and monitoring. Risk: steeper learning curve, LangChain ecosystem dependency.

Option 3: Migrate to OpenAI Agents SDK for simpler API and multi-LLM support. Best for OpenAI API users or teams needing LLM flexibility. Risk: newer framework with less enterprise track record.

Option 4: Migrate to CrewAI for high-level abstractions and independence from LangChain. Best for teams prioritizing simplicity and rapid development. Risk: less control over low-level execution details.

The presence of four viable alternatives suggests the AutoGen user base will fragment across frameworks rather than consolidate on a single successor. This fragmentation benefits no single framework but expands the overall agent development market.

Timeline Pressure

Organizations using AutoGen or Swarm face a finite migration window:

| Framework | Last Significant Update | Maintenance Status | Migration Window |
|---|---|---|---|
| Swarm | March 2025 | Deprecated | Immediate |
| AutoGen | September 2025 | Maintenance Mode | 12-18 months |

Security vulnerabilities discovered in deprecated frameworks receive slower patches. CrewAI’s recent XXE vulnerability (Issue #4967) demonstrated the speed of active framework response—the fix shipped within days. A similar vulnerability in Swarm would likely remain unpatched, creating risk for remaining users.

The security implications extend beyond patching velocity. Active frameworks benefit from community security reviews, automated scanning in CI/CD pipelines, and coordinated disclosure processes. Deprecated frameworks lack these protections, making them increasingly risky for production use.

Commit Activity as a Leading Indicator

GitHub commit activity patterns provide early warning for framework health:

| Framework | Weekly Commits (12-week avg) | Trajectory |
|---|---|---|
| CrewAI | 20-25 | Very Active |
| LangGraph | Active, weekly releases | Stable |
| OpenAI Agents SDK | Frequent releases | Growing |
| AutoGen | Near zero | Declining |
| Swarm | None | Deprecated |

CrewAI’s 20-25 weekly commits indicate the highest development velocity among active frameworks. This intensity reflects CrewAI’s position as a newer framework rapidly adding features to compete with established players. LangGraph maintains steady progress with weekly releases, indicating mature development practices. OpenAI Agents SDK shows frequent release cadence consistent with its growth phase.
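Tracking this yourself is a one-liner against GitHub's REST API: the `/repos/{owner}/{repo}/stats/commit_activity` endpoint returns 52 weekly entries of the form `{"week": <unix ts>, "total": <commits>, "days": [...]}`. The sketch below computes a trailing 12-week average from data in that shape; the weekly totals are made-up illustrative values, not real measurements.

```python
# Compute a trailing 12-week commit average from GitHub-style
# commit_activity data. The totals below are illustrative placeholders.
weekly = [{"week": 1_760_000_000 + i * 604_800, "total": t}
          for i, t in enumerate([22, 25, 19, 24, 21, 23, 20, 26, 22, 24, 25, 21])]

def trailing_average(activity, weeks=12):
    """Average commit count over the most recent `weeks` entries."""
    recent = sorted(activity, key=lambda w: w["week"])[-weeks:]
    return sum(w["total"] for w in recent) / len(recent)

print(f"12-week average: {trailing_average(weekly):.1f} commits/week")
```

A real monitor would fetch the endpoint periodically and alert when the average trends toward zero, the pattern AutoGen exhibited before its maintenance-mode announcement.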

Analysis Dimension 3: MCP Standardization and Interoperability

The Rise of Model Context Protocol

Anthropic’s Model Context Protocol (MCP), announced in November 2024, has become the dominant standard for AI agent tool integration. The statistics are striking:

  • Official MCP servers repository: 81,621 GitHub stars
  • MCP Python SDK: 22,229 stars
  • Related repositories: 18,159 search results for “mcp”

MCP solves a fundamental problem in agent development: tool and data source integration. Before MCP, each framework implemented proprietary tool interfaces. LangGraph tools didn’t work with CrewAI. OpenAI function calling formats differed from LangChain tool schemas. This fragmentation created vendor lock-in and duplicated effort across the ecosystem.

MCP provides a universal specification for:

  1. Tool Discovery: Agents can discover available tools dynamically through capability advertisements
  2. Resource Access: Standardized interfaces for databases, APIs, and files with consistent authentication patterns
  3. Prompt Templates: Shared prompt libraries across frameworks, reducing duplication
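The discovery-then-invoke pattern can be sketched in plain Python. This is an illustration of the shape of MCP tool discovery and dispatch, not the official SDK: real servers speak JSON-RPC and would typically be built with an SDK such as the `mcp` Python package, and the tool, schema, and handler here are all hypothetical.

```python
# Illustrative MCP-style tool registry: tools advertise a name,
# description, and JSON Schema for inputs; clients discover them and
# dispatch invocations by name. NOT the official MCP SDK.
TOOLS = {
    "get_weather": {  # hypothetical example tool
        "description": "Return current weather for a city",
        "inputSchema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "handler": lambda args: {"city": args["city"], "temp_c": 21},
    },
}

def list_tools():
    """Capability advertisement: expose metadata, never the handlers."""
    return [{"name": name,
             "description": tool["description"],
             "inputSchema": tool["inputSchema"]}
            for name, tool in TOOLS.items()]

def call_tool(name, arguments):
    """Dispatch an invocation to the named tool's handler."""
    return TOOLS[name]["handler"](arguments)

print(list_tools()[0]["name"])
print(call_tool("get_weather", {"city": "Berlin"}))
```

The key design point is the split between advertisement and invocation: an agent framework only needs the metadata contract, which is why any MCP-compatible tool can serve any framework.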

The protocol’s rapid adoption reflects strong industry demand for standardization. Within months of release, every major framework announced MCP support. This velocity suggests MCP addresses real pain points that the ecosystem had been working around rather than solving.

Framework MCP Support Matrix

Every major framework now supports MCP integration:

| Framework | MCP Integration | Implementation |
|---|---|---|
| LangGraph | Native | langgraph-mcp-agents adapter (690 stars) |
| CrewAI | Enterprise | enterprise-mcp-server |
| AutoGen | Native | McpWorkbench class |
| OpenAI Agents SDK | Native | tools via MCP |
| PydanticAI | Native | Built-in support |
| Google ADK | Native | Built-in support |

The rapid adoption of MCP across all frameworks indicates strong industry demand for interoperability. Developers no longer need to choose frameworks based on tool availability—any MCP-compatible tool works with any framework. This standardization reduces switching costs and enables gradual migration between frameworks.

Enterprise MCP Ecosystem

Enterprise adoption has spawned a secondary ecosystem of MCP management tools:

ToolHive (1,662 stars): Enterprise-grade MCP server management with security, monitoring, and governance features. Provides centralized control over MCP tool access, audit logging, and compliance reporting.

MCP Gateway Registry (505 stars): OAuth authentication and unified access layer for MCP servers. Addresses the authentication gap in the core MCP specification by providing standardized auth flows.

These tools address enterprise requirements: audit logging, access control, and compliance—features absent from the core MCP specification but essential for production deployment. The emergence of this ecosystem indicates MCP is moving beyond early adopters into enterprise environments.

Security Considerations

MCP adoption introduces new attack surfaces. Two notable vulnerabilities emerged in 2025:

MCP Tool Poisoning (AutoGen Issue #7427): Unsigned tool definitions can enable arbitrary code execution. Malicious actors could inject tool definitions that execute harmful code when invoked. Frameworks are implementing tool signing and verification mechanisms to address this.

MCP Authentication Gap (AutoGen Issue #7403): MCP tool integration lacked per-message authentication or integrity verification. Tool invocations could be intercepted or modified in transit. Enterprise MCP gateways now provide OAuth-based authentication to mitigate this risk.

Organizations deploying MCP in production should evaluate these security implications and implement appropriate mitigations. The MCP security model continues to evolve as the community identifies and addresses vulnerabilities.

Key Data Points

| Metric | LangGraph | CrewAI | AutoGen | OpenAI Agents SDK | Source |
|---|---|---|---|---|---|
| GitHub Stars | 26,980 | 46,669 | 55,930 | 20,153 | GitHub API |
| PyPI Monthly Downloads | 41.1M | 5.77M | 404K | 18.2M | PyPI Stats |
| PyPI Daily Downloads | 1.67M | 210K | 15.5K | 772K | PyPI Stats |
| Open Issues | 456 | 421 | 687 | N/A | GitHub API |
| Development Status | Active | Very Active | Maintenance | Active | GitHub |
| MCP Support | Native | Enterprise | Native | Native | Documentation |
| Enterprise Customers | Klarna, Replit, Elastic | AMP Suite | Limited | Growing | Official Sites |
| Learning Curve | Steep | Moderate | Moderate | Low | Community |
| Stack Overflow Questions | 193 | 58 | 0 | N/A | Stack Exchange |

| Metric | Value | Source | Date |
|---|---|---|---|
| MCP Servers Repo Stars | 81,621 | GitHub | 2026-03-20 |
| MCP Python SDK Stars | 22,229 | GitHub | 2026-03-20 |
| CrewAI Certified Developers | 100,000+ | CrewAI Official | 2026-03 |
| Swarm Last Commit | 2025-03 | GitHub | 2025-03 |
| AutoGen Last Major Release | v0.7.5 | GitHub | 2025-09 |
| uv (Astral) Stars | 81,569 | GitHub | 2026-03-20 |
| ruff (Astral) Stars | 46,510 | GitHub | 2026-03-20 |

🔺 Scout Intel: What Others Missed

Confidence: high | Novelty Score: 78/100

The most consequential shift in the agent framework ecosystem is not the rise of any single framework, but the decoupling of tool integration from framework choice. MCP’s emergence as a universal standard means that LangGraph, CrewAI, and OpenAI Agents SDK can now access identical tool ecosystems—a fundamental change from the fragmented landscape of 2023-2024.

Consider the strategic implications: when OpenAI Agents SDK launched, analysts focused on its feature parity with LangGraph and competition for developer mindshare. The deeper signal was OpenAI’s decision to build MCP support from day one. By doing so, OpenAI conceded that it would not build a proprietary tool ecosystem—the real battleground shifted from framework lock-in to orchestration quality, observability depth, and developer experience.

The evidence supports this interpretation. The MCP servers repository (81,621 stars) has accumulated more social proof than any individual framework repository. Enterprise tooling like ToolHive and MCP Gateway Registry addresses production requirements that frameworks themselves avoided. The AutoGen community’s RFC for AMP (Agent Message Protocol) in Issue #7415 reveals appetite for even deeper standardization—cross-framework agent discovery that would enable LangGraph agents to invoke CrewAI agents transparently.

Key Implication: Organizations should evaluate frameworks based on orchestration capabilities, observability integrations, and team expertise—not tool availability. The MCP layer has commoditized tool access, making framework selection a question of workflow complexity and operational requirements rather than ecosystem breadth.

Outlook & Predictions

Near-term (0-6 months)

AutoGen Migration Wave: Organizations using AutoGen will begin migration planning. Expect increased activity on migration guides and tooling. Microsoft Agent Framework documentation will improve to capture this user base, but migration will fragment across multiple frameworks rather than consolidate on Microsoft’s successor.

OpenAI Agents SDK Growth: Monthly downloads will likely exceed 25 million, approaching 60% of LangGraph’s volume. OpenAI’s distribution advantage (bundling with OpenAI API access, integration with OpenAI models) accelerates adoption. The framework’s appeal extends beyond OpenAI loyalists through its multi-LLM support.

MCP Security Hardening: Major frameworks will implement tool signing and verification to address tool poisoning vulnerabilities. Enterprise MCP gateways will become standard for production deployments. The MCP security model will mature rapidly as enterprise adoption exposes gaps.

Confidence Level: High. All three predictions follow from current trends and announced deprecations.

Medium-term (6-18 months)

Framework Consolidation: The viable framework landscape will consolidate to 3-4 major players: LangGraph (enterprise complex workflows), OpenAI Agents SDK (OpenAI ecosystem), CrewAI (rapid development, SMB market), and potentially Google ADK (Google Cloud customers). Smaller frameworks will either specialize for niche use cases or fade.

Microsoft Agent Framework Positioning: Microsoft will invest significantly in Microsoft Agent Framework to retain AutoGen users within the Azure ecosystem. Success depends on documentation quality, Azure integration depth, and enterprise support commitments. Competitive pressure from LangGraph and Agents SDK will intensify.

Cross-Framework Agent Collaboration: MCP will enable agents from different frameworks to collaborate. A LangGraph orchestrator might invoke CrewAI agents for specific tasks. This interoperability layer reduces framework lock-in and enables hybrid architectures.

Confidence Level: Medium-High. Consolidation is already visible; cross-framework collaboration depends on MCP maturity and framework support.

Long-term (18+ months)

Commoditization of Orchestration: Agent orchestration becomes a commodity. Frameworks differentiate on observability, enterprise features, and ecosystem integrations rather than core orchestration capabilities. The MCP layer abstracts away tool integration differences.

Emergence of Agent Runtimes: Platform-agnostic agent runtimes (like agent-kernel with 25 stars currently) may gain traction, allowing framework-agnostic deployment similar to how container runtimes standardized application deployment. These runtimes would handle execution, scaling, and monitoring across framework implementations.

Standardization of Agent Interfaces: MCP and similar protocols will evolve into formal standards (potentially under ISO/IEEE or W3C), enabling true plug-and-play agent ecosystems. The industry will converge on common patterns for agent discovery, invocation, and monitoring.

Confidence Level: Medium. Long-term predictions depend on market evolution and competitive dynamics not yet visible.

Key Trigger to Watch

PyPI Download Parity: If OpenAI Agents SDK monthly downloads exceed LangGraph within the next 12 months, it signals OpenAI’s ecosystem dominance in the agent framework space. Monitor PyPI Stats API for download trends. The crossover point would validate OpenAI’s strategy of building a production framework on Swarm’s conceptual foundation while extending it with enterprise features.
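A back-of-envelope crossover estimate follows from compound growth. Only the starting volumes below come from the article; the monthly growth rates are hypothetical inputs you would replace with observed trends.

```python
import math

# Starting monthly download volumes (from the article).
agents_sdk = 18.2e6
langgraph = 41.1e6

# Assumed compound monthly growth rates -- hypothetical, for illustration.
g_sdk, g_lg = 0.10, 0.02

# Solve agents_sdk * (1+g_sdk)**m = langgraph * (1+g_lg)**m for m months.
months = math.log(langgraph / agents_sdk) / math.log((1 + g_sdk) / (1 + g_lg))
print(f"Crossover in roughly {months:.1f} months under these assumptions")
```

Under these particular assumptions parity arrives in under a year; with slower relative growth the crossover recedes accordingly, which is why the PyPI trend line itself is the metric to watch.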
