
RecallMax — God-Tier Long-Context Memory

Overview

RecallMax enhances AI agent memory capabilities dramatically. Inject 500K to 1M clean tokens of external context without hallucination drift. Auto-summarize conversations while preserving tone, sarcasm, and intent. Compress multi-turn histories into high-density token sequences.

Free forever. Built by the Genesis Agent Marketplace.

Install

npx skills add christopherlhammer11-ai/recallmax

When to Use This Skill

  • Use when your agent loses context in long conversations (50+ turns)
  • Use when injecting large RAG/external documents into agent context
  • Use when you need to compress conversation history without losing meaning
  • Use when fact-checking claims across a long thread
  • Use for any agent that needs to remember everything

How It Works

Step 1: Context Injection

RecallMax cleanly injects external context (documents, RAG results, prior conversations) into the agent's working memory. Unlike naive concatenation, it:

  • Deduplicates overlapping content
  • Preserves source attribution
  • Prevents hallucination drift from context pollution
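The injection behavior described above can be sketched in a few lines. This is an illustrative Python sketch, not RecallMax's actual implementation; the `inject_context` function and its chunk format are assumptions made for the example.

```python
from hashlib import sha256

def inject_context(chunks: list[tuple[str, str]]) -> str:
    """Merge (source, text) chunks into one context block:
    deduplicate repeated content and keep source attribution."""
    seen: set[str] = set()
    parts: list[str] = []
    for source, text in chunks:
        # Normalize whitespace and case so near-identical chunks dedupe.
        key = sha256(" ".join(text.split()).lower().encode()).hexdigest()
        if key in seen:
            continue
        seen.add(key)
        parts.append(f"[source: {source}]\n{text}")
    return "\n\n".join(parts)

chunks = [
    ("docs/faq.md", "Refunds are processed within 14 days."),
    ("crm-notes", "Refunds are processed within 14 days."),  # duplicate, dropped
    ("email-thread", "Customer asked about refund timing on May 2."),
]
print(inject_context(chunks))
```

Keeping the `[source: …]` tag on every surviving chunk is what lets the agent attribute a claim back to its document instead of treating pooled context as one undifferentiated blob.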

Step 2: Adaptive Summarization

As conversations grow, RecallMax automatically summarizes older turns while preserving:

  • Tone — sarcasm, formality, urgency
  • Intent — what the user actually wants vs. what they said
  • Key facts — numbers, names, decisions, commitments
  • Emotional register — frustration, excitement, confusion
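A summary that preserves these dimensions is structured, not just shorter text. The sketch below shows one plausible shape for such a record; the `TurnSummary` class and the toy `summarize_turn` heuristic are assumptions for illustration (a real system would fill these fields with an LLM call).

```python
from dataclasses import dataclass, field

@dataclass
class TurnSummary:
    """A summary that keeps more than the surface text."""
    gist: str
    tone: str                        # e.g. "sarcastic", "urgent", "neutral"
    intent: str                      # what the user actually wants
    key_facts: list[str] = field(default_factory=list)

def summarize_turn(turn: str) -> TurnSummary:
    """Toy stand-in for an LLM summarizer: flags obvious tone markers
    and pulls out numbers as candidate key facts."""
    lowered = turn.lower()
    tone = "urgent" if "asap" in lowered or "!" in turn else "neutral"
    facts = [w for w in turn.replace(",", " ").split()
             if any(c.isdigit() for c in w)]
    return TurnSummary(gist=turn[:60], tone=tone,
                       intent="resolve request", key_facts=facts)

s = summarize_turn("I need the invoice for order 4417 fixed ASAP!")
print(s.tone, s.key_facts)
```

The point of the structure is that downstream turns can consult `tone` and `key_facts` directly, so compression never silently flattens sarcasm or drops a committed number.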

Step 3: History Compression

Compress a 14-turn conversation history into ~800 high-density tokens that retain full semantic meaning. The compressed output can be re-expanded if needed.
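A minimal sketch of budget-driven compression, under stated assumptions: `rough_tokens` is a crude 4-chars-per-token estimate (real systems use an actual tokenizer), and the one-line fold of older turns stands in for LLM summarization.

```python
def rough_tokens(text: str) -> int:
    """Crude token estimate (~4 chars per token)."""
    return max(1, len(text) // 4)

def compress_history(turns: list[str], budget: int = 800,
                     keep_last: int = 4) -> str:
    """Keep the most recent turns verbatim; fold older turns into one
    dense summary line so the whole history fits the token budget."""
    older, recent = turns[:-keep_last], turns[-keep_last:]
    summary = "[summary of %d turns: %s]" % (
        len(older), " | ".join(t[:40] for t in older))
    compressed = "\n".join([summary] + recent)
    assert rough_tokens(compressed) <= budget, "tighten the summary further"
    return compressed

history = [f"Turn {i}: some discussion about topic {i}" for i in range(1, 15)]
print(rough_tokens(compress_history(history)))
```

Recent turns stay verbatim because they are the ones the agent is most likely to quote back; only the cold tail pays the lossy-compression cost.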

Step 4: Fact Verification

Built-in cross-reference checks for controversial or ambiguous claims within the conversation context. Flags contradictions and unsupported assertions.
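The cross-reference idea can be sketched with a deliberately naive rule: group "X is Y" claims by subject and flag pairs that disagree. The regex pattern and `flag_contradictions` helper are illustrative assumptions, not the skill's internals.

```python
import re

def flag_contradictions(claims: list[str]) -> list[tuple[str, str]]:
    """Naive cross-reference check: index 'subject is value' claims
    and flag later claims that assign a conflicting value."""
    seen: dict[str, str] = {}
    conflicts: list[tuple[str, str]] = []
    for claim in claims:
        m = re.match(r"(.+?) is (.+)", claim.strip().rstrip("."))
        if not m:
            continue
        subject, value = m.group(1).lower(), m.group(2).lower()
        if subject in seen and seen[subject] != value:
            conflicts.append((f"{subject} is {seen[subject]}", claim))
        else:
            seen.setdefault(subject, value)
    return conflicts

claims = [
    "The deadline is June 3.",
    "Our contact is Dana.",
    "The deadline is June 10.",   # contradicts the first claim
]
print(flag_contradictions(claims))
```

A production verifier would use semantic matching rather than string equality, but the output shape is the same: a pair of conflicting claims the agent can surface instead of silently picking one.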

Best Practices

  • ✅ Use RecallMax at the start of long-running agent sessions
  • ✅ Enable auto-summarization for conversations beyond 20 turns
  • ✅ Use compression before hitting context window limits
  • ✅ Let the fact verifier run on high-stakes outputs
  • ❌ Don't inject unvetted external content without dedup
  • ❌ Don't skip summarization and rely on raw truncation

Related Skills

  • @tool-use-guardian - Tool-call reliability wrapper (also free from Genesis Marketplace)

Limitations

  • Use this skill only when the task clearly matches the scope described above.
  • Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
  • Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.