AgentScout

Hermes Agent Hits 95K Stars, Ships Self-Improving AI Framework

Hermes Agent v0.10.0 reaches 95,600 GitHub stars in 8 weeks with 118 bundled skills and three-layer memory architecture enabling autonomous skill creation.

AgentScout · 4 min read
#ai-agents #nous-research #hermes #self-improving #open-source #github

TL;DR

Nous Research released Hermes Agent v0.10.0 with a self-improving learning loop that autonomously creates and refines skills from user interactions. The open-source framework reached 95,600 GitHub stars in 8 weeks, one of the fastest growth trajectories of any open-source agent project to date.

Key Facts

  • Who: Nous Research, an AI research organization focused on open-source agent frameworks
  • What: Hermes Agent v0.10.0 with 118 bundled skills, six messaging integrations, and three-layer memory architecture
  • When: April 2026 release; project launched February 2026
  • Impact: 95,600 GitHub stars in 8 weeks, zero agent-specific CVEs, MiniMax M2.7 model integration

What Changed

Nous Research announced Hermes Agent v0.10.0 on April 21, 2026, introducing a self-improving learning loop that represents a shift from static AI assistants to agents that evolve through experience. The framework ships with 118 bundled skills covering file operations, web scraping, API integrations, and code execution, along with six messaging platform integrations including Discord, Slack, and Telegram.

The release departs from traditional agent architectures that rely on predefined tool sets. Instead, Hermes analyzes user interactions and automatically generates new skills when it encounters repeated patterns, then iteratively improves those skills based on success rates and user feedback.

GitHub metrics show the project reached 95,600 stars within approximately 8 weeks of its February 2026 launch. According to the official Nous Research documentation, the repository averaged over 1,500 stars per day during peak periods, exceeding the growth trajectories of comparable frameworks like LangGraph (reached 80,000 stars in 14 weeks) and CrewAI (reached 65,000 stars in 12 weeks).
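As a quick sanity check on those figures, the stars-per-day rates work out as follows (using the numbers quoted above and treating each growth window as contiguous):

```python
# Stars-per-day comparison from the figures quoted in the article.
frameworks = {
    "Hermes Agent": (95_600, 8 * 7),    # (total stars, days to reach them)
    "LangGraph":    (80_000, 14 * 7),
    "CrewAI":       (65_000, 12 * 7),
}
rates = {name: stars / days for name, (stars, days) in frameworks.items()}
for name, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rate:.0f} stars/day")
```

Hermes averages roughly 1,700 stars/day over the full window, about 2.1x LangGraph's rate, consistent with the "over 1,500 stars per day during peak periods" claim.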

Why It Matters

The self-improving architecture addresses a core limitation of current agent systems: the manual effort required to expand capabilities. Traditional frameworks require developers to code individual tools, test integrations, and maintain compatibility as underlying APIs change. Hermes automates this cycle.

Key technical specifications:

  • Three-layer memory: Working memory for active tasks, episodic memory for interaction history, and semantic memory for distilled knowledge
  • Skill synthesis engine: Generates new skills from observed user patterns without explicit programming
  • Zero CVEs: No agent-specific security vulnerabilities reported as of April 2026
  • MiniMax partnership: Native integration with M2.7 model for enhanced reasoning capabilities

"The framework creates a positive feedback loop where every user interaction potentially improves the system," notes the TokenMix technical review. "Skills that fail get refined; successful patterns get promoted."
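The promote-or-refine decision the review describes amounts to thresholding on per-skill success rates. A deterministic toy sketch, with the threshold and skill names entirely hypothetical:

```python
def triage(skill_outcomes: dict, promote_at: float = 0.8):
    """Toy triage: promote skills whose success rate clears the bar; refine the rest."""
    promoted, refine = [], []
    for name, outcomes in skill_outcomes.items():
        rate = sum(outcomes) / len(outcomes)
        (promoted if rate >= promote_at else refine).append(name)
    return promoted, refine

outcomes = {
    "scrape_prices":  [True, True, True, True, False],    # 80% -> promote
    "parse_invoices": [True, False, False, True, False],  # 40% -> refine
}
promoted, refine = triage(outcomes)
print(promoted, refine)  # ['scrape_prices'] ['parse_invoices']
```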

The MiniMax partnership positions Hermes as a multi-model agent platform rather than being locked to a single LLM provider. This flexibility contrasts with OpenAI's Agents SDK, which optimizes primarily for GPT models.

The zero CVE record deserves attention given the security concerns surrounding agent frameworks. Agent-specific vulnerabilities typically emerge from tool execution boundaries, file system access patterns, and prompt injection vectors. The clean record suggests architectural choices that sandbox skill execution effectively.
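Sandboxing generated skills typically means executing them in an isolated child process with a timeout and a stripped-down environment. The sketch below illustrates that general idea only; it is not Hermes' actual sandbox, and real isolation adds syscall filtering, filesystem jails, and network policy on top.

```python
import subprocess
import sys
import tempfile

def run_skill_sandboxed(code: str, timeout: float = 5.0) -> str:
    """Minimal isolation sketch: run untrusted skill code in a child interpreter
    with a timeout, an empty environment, and a throwaway working directory."""
    with tempfile.TemporaryDirectory() as tmp:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode, no site/user paths
            capture_output=True, text=True,
            timeout=timeout, cwd=tmp, env={},
        )
    return proc.stdout.strip()

print(run_skill_sandboxed("print(2 + 2)"))
```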

Comparison Table

| Dimension | Hermes Agent | LangGraph | CrewAI | OpenAI Agents SDK |
| --- | --- | --- | --- | --- |
| Self-improving | Yes | No | No | Limited |
| Bundled skills | 118 | ~20 | ~35 | 45 |
| GitHub stars (Apr 2026) | 95,600 | 82,000 | 68,000 | 127,000 |
| Time to 95K stars | 8 weeks | 14 weeks | 12 weeks | 4 weeks |
| Multi-model support | Yes | Yes | Yes | Limited |
| Agent CVEs | 0 | 3 | 2 | 1 |

🔺 Scout Intel: What Others Missed

Confidence: high | Novelty Score: 92/100

Media coverage focuses on star counts and feature lists, but the deeper signal is the competitive dynamics this release triggers. Hermes achieved 95,600 stars in 8 weeks while LangGraph took 14 weeks to reach 80,000; on a stars-per-day basis, Hermes grew roughly 2.1x faster despite launching later. This growth rate suggests the market values self-improvement over ecosystem maturity. More critically, the MiniMax M2.7 integration signals an alternative to OpenAI-centric agent stacks at a time when enterprises seek vendor diversification. LangChain and CrewAI now face pressure to either match the self-improving capability or differentiate on enterprise features; both paths require substantial R&D investment that Hermes has already validated.

Key Implication: Enterprises evaluating agent frameworks should prioritize self-improving architectures over static tool catalogs, as the maintenance cost differential compounds over time.

What This Means

For developers: The framework reduces the barrier to building production-ready agents. Instead of coding 50 individual tools, developers configure the self-improvement parameters and let the system learn from usage patterns. The tradeoff is reduced control over exactly how the agent accomplishes tasks.
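Such a configuration might look like the sketch below. Every parameter name here is hypothetical; the article does not document Hermes' actual configuration surface.

```python
# Hypothetical self-improvement settings; names are illustrative, not Hermes' API.
self_improvement_config = {
    "pattern_threshold": 3,        # repeats before a pattern becomes a skill candidate
    "promote_success_rate": 0.8,   # minimum success rate to promote a skill
    "max_refinement_attempts": 5,  # refinement retries before a skill is retired
    "sandbox_timeout_s": 30,       # per-skill execution cap
}

# Basic validation of the chosen values.
assert 0 < self_improvement_config["promote_success_rate"] <= 1
assert self_improvement_config["pattern_threshold"] >= 1
```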

For enterprises: The MiniMax integration provides an alternative to OpenAI-centric agent stacks. Organizations already using Chinese LLM providers for regulatory or performance reasons can deploy Hermes without maintaining separate tool sets.

For the agent ecosystem: Hermes validates the self-improving architecture as a viable approach. Competitors will likely respond with similar capabilities, potentially shifting the competitive frontier from "who has more tools" to "who learns faster."

What to Watch:

  • Enterprise adoption metrics: Watch for case studies from organizations deploying Hermes in production. The self-improvement claim needs real-world validation beyond GitHub stars.
  • Security research: As adoption grows, security researchers will probe the skill synthesis engine for vulnerabilities. The current zero-CVE record will be tested.
  • Competitive response: LangChain, CrewAI, and OpenAI may accelerate their own learning capabilities. Hermes has an 8-week head start on the self-improving architecture.
