AI for Content Creators

Top video, audio, image, and writing tools curated for content creators building audiences with AI.

Top Repos

langgenius/dify

Dify is an open-source platform for building production-grade AI applications through a visual drag-and-drop workflow builder. Instead of writing boilerplate code to chain LLM calls, manage prompts, and wire up retrieval pipelines, developers lay out their logic on a canvas -- connecting model nodes, tool calls, conditional branches, and human-in-the-loop checkpoints into executable workflows. The result is a system that can go from prototype to production deployment without rewriting the orchestration layer.

The platform supports hundreds of LLM providers out of the box: OpenAI GPT models, Anthropic Claude, Mistral, Llama 3, Qwen, and any provider exposing an OpenAI-compatible API. Switching between models is a dropdown change, not a code refactor. This provider-agnostic design means teams can start with a cloud API, benchmark against alternatives, and migrate to self-hosted models without touching their workflow logic.

Dify ships with a full RAG pipeline built in. Upload PDFs, presentations, or plain text, and the system handles chunking, embedding, vector storage, and retrieval. Version 1.12.0 introduced Summary Index, which attaches AI-generated summaries to document chunks so semantically related content clusters together during retrieval. Version 1.13.0 added multimodal retrieval that unifies text and images into a single semantic space for vision-enabled reasoning.

The agent capabilities layer supports both function-calling and ReAct-style reasoning with over 50 built-in tools including Google Search, DALL-E, Stable Diffusion, and WolframAlpha. Since v1.0.0, all models and tools have been migrated to a plugin architecture, so extending Dify with custom integrations no longer requires forking the core codebase.
On the operational side, Dify includes a Prompt IDE for comparing model outputs side-by-side, LLMOps-grade logging and performance monitoring, and a Backend-as-a-Service API layer that lets frontend applications consume workflows through REST endpoints. The v1.13.0 release added a Human Input node that pauses workflows for human review, enabling approval gates and content moderation loops directly within automated pipelines. Deployment is flexible: Docker Compose for quick self-hosted setups (minimum 2 CPU cores, 4 GB RAM), Kubernetes via five community-maintained Helm charts, Terraform templates for Azure and Google Cloud, and AWS CDK for infrastructure-as-code deployments. Dify Cloud offers a managed option with 200 free GPT-4 calls to get started. Enterprise customers including Volvo Cars and Kakaku.com run Dify in production -- Kakaku.com reported 75% employee adoption with nearly 950 internal AI applications built on the platform.
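Consuming a workflow through that Backend-as-a-Service layer can be sketched roughly as below. This is a minimal stdlib-only sketch, not the official SDK: the base URL and app key are placeholders for your own instance, and the field names follow Dify's published chat-messages API, so verify them against your deployed version.

```python
import json
import urllib.request

DIFY_BASE = "http://localhost/v1"  # self-hosted gateway; placeholder URL
API_KEY = "app-xxxxxxxx"           # per-app key from the Dify dashboard; placeholder

def build_chat_request(query, user, conversation_id=""):
    """Payload shape for Dify's POST /chat-messages endpoint."""
    return {
        "inputs": {},                        # workflow variables defined on the canvas
        "query": query,
        "response_mode": "blocking",         # "streaming" returns SSE chunks instead
        "conversation_id": conversation_id,  # empty string starts a new conversation
        "user": user,                        # stable end-user id, surfaced in LLMOps logs
    }

def ask(query):
    """POST a query to a running Dify app and return the answer text."""
    req = urllib.request.Request(
        f"{DIFY_BASE}/chat-messages",
        data=json.dumps(build_chat_request(query, user="demo-user")).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["answer"]

# Example against a live instance: ask("Summarize this week's uploads.")
```

Because the orchestration lives server-side, a frontend only ever touches this one endpoint; swapping the underlying model or rewiring the canvas requires no client changes.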

132,700 stars

anomalyco/opencode

118,000 stars in under a year. That's not a typo. OpenCode went from zero to the most-starred AI coding agent on GitHub faster than any developer tool in memory -- and it did it by being the one thing Cursor and Claude Code refuse to be: completely open source, completely provider-agnostic, and completely free.

Here's the pitch that mass-converted developers: bring your own model, bring your own keys, keep your own data. OpenCode doesn't care if you're running Claude Opus, GPT-4.1, Gemini 2.5, or a local Llama instance through Ollama. It treats every provider as a first-class citizen. If you've ever felt locked into Anthropic's pricing because Claude Code only works with Claude, or locked into Cursor's $20/month because switching means losing your workflow -- OpenCode is the exit door 5 million developers already walked through.

The architecture is what makes it stick. OpenCode runs a client/server split -- the AI agent runs as a background server while the TUI, desktop app, or IDE extension connects as a client. That means you can run the agent on a beefy remote machine and code from a thin laptop over SSH. Try doing that with Cursor. Two built-in agents handle different workflows: a build agent for writing and modifying code, and a read-only plan agent for exploring codebases without accidentally changing anything. There's also a general subagent that handles complex multi-step searches. LSP integration gives it real code intelligence -- not just pattern matching, but actual type-aware navigation and diagnostics.

The release velocity tells its own story: 731 releases, 10,045 commits, v1.2.21 shipped March 7, 2026. The team at Anomaly (the same folks behind terminal.shop) ships daily. MCP support means you can extend it with the same server ecosystem Claude Code uses. Install takes one curl command. You're coding in 30 seconds.

121,683 stars

infiniflow/ragflow

RAGFlow is a leading open-source Retrieval-Augmented Generation engine that fuses deep document understanding with agentic AI capabilities to build a superior context layer for large language models. Unlike general-purpose RAG frameworks, RAGFlow specializes in extracting structured knowledge from complex, visually rich documents — including PDFs with tables, multi-column layouts, images, scanned copies, spreadsheets, slides, and web pages — with high fidelity.

The platform provides template-based intelligent chunking with visual customization, high-precision hybrid search combining vector search, BM25, and custom scoring with advanced re-ranking, and grounded citations that reduce hallucinations by linking every answer back to traceable source references. RAGFlow includes a visual workflow builder for designing agentic RAG pipelines with memory support, Model Context Protocol (MCP) integration, and multi-modal model support for processing images within documents.

It ships with Docker-based deployment in both lightweight (2 GB) and full-featured (9 GB) configurations, supports Elasticsearch and Infinity as storage backends, and works with configurable LLMs and embedding models. With 74,000+ GitHub stars and an Apache 2.0 license, RAGFlow has become one of the most popular open-source RAG solutions, particularly for enterprise use cases in equity research, legal analysis, and manufacturing where document intelligence is critical.
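A retrieval call against a self-hosted instance might look like the stdlib sketch below. The endpoint path and field names are assumptions based on RAGFlow's HTTP API documentation and may differ between releases; the base URL, port, and key are placeholders.

```python
import json
import urllib.request

RAGFLOW_BASE = "http://localhost:9380"  # common self-hosted port; placeholder
API_KEY = "ragflow-xxxxxxxx"            # API key issued in the RAGFlow UI; placeholder

def build_retrieval_request(question, dataset_ids, top_k=5):
    """Body for a retrieval query (field names assumed from RAGFlow's HTTP API docs)."""
    return {
        "question": question,
        "dataset_ids": dataset_ids,  # knowledge bases (datasets) to search
        "top_k": top_k,              # candidate chunks to fetch before re-ranking
    }

def retrieve(question, dataset_ids):
    """Run hybrid retrieval and return the raw JSON (chunks with source references)."""
    req = urllib.request.Request(
        f"{RAGFLOW_BASE}/api/v1/retrieval",
        data=json.dumps(build_retrieval_request(question, dataset_ids)).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)
```

The returned chunks carry the traceable source references the grounded-citation feature is built on, so a downstream LLM prompt can cite them directly.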

74,955 stars

msitarzewski/agency-agents

Agency-Agents is a production-ready collection of 144+ specialized AI agent personas organized across 12 divisions — Engineering, Design, Paid Media, Sales, Marketing, Product, Project Management, Testing, Support, Spatial Computing, Game Development, and Specialized. Each agent is a structured Markdown file that gives any LLM a specific professional identity, complete with domain expertise, a distinct communication style, battle-tested workflows, concrete deliverables, and measurable success metrics. Rather than relying on generic prompting, you activate a focused expert — a Frontend Wizard, Security Engineer, Brand Guardian, UX Researcher, or one of 140+ others — and the LLM narrows its context accordingly, reducing hallucinations and enforcing domain best practices. The project was born from a Reddit discussion about AI agent specialization and grew through months of community iteration into one of the fastest-starred repositories on GitHub. It integrates natively with Claude Code's /agents system by placing Markdown files in ~/.claude/agents/, and ships automated install scripts that convert agents for Cursor, Aider, Windsurf, Gemini CLI, OpenCode, and more. With 43.9K stars and 6.6K forks as of March 2026, agency-agents has become the de facto starting point for teams that want to run structured multi-agent workflows from their IDE.
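The Claude Code integration really is just files in a directory, so the repo's install scripts reduce to a copy step like the sketch below. The helper name and the source path are hypothetical; only the destination convention (`~/.claude/agents/`) comes from the project's documentation.

```python
import shutil
from pathlib import Path

def install_agents(src_dir, dest_dir):
    """Copy agent persona .md files into a Claude Code agents directory.

    Returns the list of installed file names. `src_dir` would be a local
    checkout of agency-agents (or one of its division folders).
    """
    dest = Path(dest_dir).expanduser()
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for md in sorted(Path(src_dir).glob("**/*.md")):
        shutil.copy2(md, dest / md.name)  # flatten divisions into one agents dir
        copied.append(md.name)
    return copied

# Typical use (paths are placeholders for a local checkout):
# install_agents("agency-agents/engineering", "~/.claude/agents")
```

The per-tool install scripts the repo ships do the equivalent conversion for Cursor, Aider, Windsurf, and the other supported clients, each with its own destination layout.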

43,880 stars

ItzCrazyKns/Perplexica

Perplexica is a privacy-focused, open-source AI-powered answering engine designed to run entirely on your own hardware. Often described as an open-source alternative to Perplexity AI, Perplexica combines real-time internet search with the intelligence of large language models to deliver accurate, cited answers to complex queries without compromising user privacy.

At its core, Perplexica leverages SearxNG as its search backbone, a privacy-respecting metasearch engine that aggregates results from multiple sources without tracking users. The retrieved results are then processed through a Retrieval-Augmented Generation (RAG) pipeline, where an LLM synthesizes the information into coherent, source-cited responses. This architecture ensures that every answer is grounded in verifiable web content rather than relying solely on the model's training data.

One of Perplexica's most compelling features is its multi-model flexibility. Users can connect to virtually any LLM provider, including OpenAI, Anthropic Claude, Google Gemini, Groq, and locally hosted models through Ollama. This means developers and privacy-conscious users can run the entire stack on-premises with no data leaving their network, or mix cloud and local models depending on the task.

Perplexica offers three distinct search modes tailored to different needs. Speed Mode prioritizes quick answers for simple lookups. Balanced Mode handles everyday research with a good tradeoff between depth and response time. Quality Mode performs deep, multi-step research for thorough investigation of complex topics. Beyond web search, the engine supports academic paper search, discussion forum search, image and video search, and domain-restricted queries. The platform also includes smart contextual widgets that surface relevant quick-lookup information such as weather forecasts, mathematical calculations, and stock prices directly in the search interface.
Users can upload files including PDFs, text documents, and images for the AI to analyze alongside web results. All search history is saved locally, giving users full control over their data. With over 31,000 GitHub stars, 3,300 forks, and 44 contributors, Perplexica has established itself as one of the most popular open-source AI search projects. The project ships with Docker support for easy deployment, including a bundled SearxNG option that gets everything running with a single command. One-click deployment is also available through platforms like Sealos, RepoCloud, and Hostinger. A developer-facing API allows integration of Perplexica's search capabilities into custom applications and workflows.
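Hitting that developer-facing API from a script might look like the stdlib sketch below. The `/api/search` path and the `focusMode`/`optimizationMode` field names follow Perplexica's API documentation, but treat them as assumptions to check against your installed version; the base URL and response keys are placeholders for a default self-hosted setup.

```python
import json
import urllib.request

PERPLEXICA_BASE = "http://localhost:3000"  # default self-hosted address; placeholder

def build_search_request(query, focus="webSearch", optimization="balanced"):
    """Body for Perplexica's POST /api/search (field names assumed from its docs)."""
    return {
        "query": query,
        "focusMode": focus,                # webSearch, academicSearch, etc.
        "optimizationMode": optimization,  # e.g. "speed" or "balanced"
    }

def search(query):
    """Run a cited web search and return (answer_text, sources)."""
    req = urllib.request.Request(
        f"{PERPLEXICA_BASE}/api/search",
        data=json.dumps(build_search_request(query)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        data = json.load(resp)
    # Response keys assumed: "message" (synthesized answer), "sources" (citations)
    return data.get("message"), data.get("sources")
```

Since the whole stack can run on-premises, a call like this never sends the query to a third party beyond the search engines SearxNG fans out to.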

32,909 stars

QwenLM/Qwen3

Qwen3 is the flagship open-weight large language model series from Alibaba Cloud's Qwen team, offering one of the most comprehensive lineups in the open-source AI ecosystem. The repository serves as the central hub for a family of models spanning dense architectures (0.6B, 1.7B, 4B, 8B, 14B, and 32B parameters) and mixture-of-experts designs (30B-A3B and 235B-A22B), giving developers and researchers granular control over the compute-performance tradeoff for their specific deployment scenario.

What distinguishes Qwen3 from other open-weight model families is its hybrid thinking architecture. Every model in the series supports seamless switching between a step-by-step reasoning mode for complex logic, mathematics, and code generation, and a rapid non-thinking mode for straightforward queries. Users can configure thinking budgets to balance latency against reasoning depth, making the models adaptable to both real-time applications and offline batch processing.

Trained on approximately 36 trillion tokens, double the training corpus of Qwen2.5, the models demonstrate strong multilingual capabilities across 119 languages and dialects. Context windows range from 32K tokens on smaller models to 128K on larger variants, with experimental support extending to 1 million tokens in the Qwen3-2507 update released in August 2025.

The series has evolved further with Qwen3.5, which introduced compact models from 0.8B to 9B parameters optimized for on-device deployment using a hybrid Gated Delta Networks and sparse MoE architecture. Qwen3 integrates natively with popular inference frameworks including vLLM, SGLang, TensorRT-LLM, llama.cpp, and Ollama, and ships with enhanced agentic capabilities for tool calling and MCP (Model Context Protocol) support. All models are released under the Apache 2.0 license and available on Hugging Face, ModelScope, and Kaggle.
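The hybrid thinking controls come in two layers, sketched below without the model download itself. The `enable_thinking` template flag and the `/think` / `/no_think` per-turn soft switches come from the Qwen3 model cards; the `user_turn` helper is hypothetical, just to show how the switches compose in a multi-turn conversation.

```python
# Two controls for Qwen3's hybrid thinking (per the Qwen3 model cards):
#   1. enable_thinking=True/False passed to tokenizer.apply_chat_template()
#      sets the session-wide default.
#   2. A trailing /think or /no_think soft switch in a user turn overrides
#      that default for just that turn.

def user_turn(content, think=None):
    """Build one chat message, optionally appending the per-turn soft switch."""
    if think is True:
        content += " /think"       # force step-by-step reasoning this turn
    elif think is False:
        content += " /no_think"    # force the fast non-thinking path this turn
    return {"role": "user", "content": content}

messages = [
    user_turn("Prove that sqrt(2) is irrational.", think=True),
    user_turn("Now just give me the one-line takeaway.", think=False),
]

# With transformers installed, the same messages feed the chat template:
#   text = tokenizer.apply_chat_template(
#       messages, tokenize=False, add_generation_prompt=True,
#       enable_thinking=True,  # session default; soft switches override per turn
#   )
```

This is what makes the thinking-budget tradeoff operational: expensive reasoning only where a turn asks for it, fast decoding everywhere else.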

26,896 stars
