AI for Developers

Top coding tools, MCP servers, open-source repos, and tutorials curated for software engineers.

Top Repos

openclaw/openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

310,753 stars

n8n-io/n8n

n8n is an open-source workflow automation platform that combines a visual, no-code interface with the flexibility of custom JavaScript and Python code execution. With over 150,000 GitHub stars as of March 2026, it has become one of the most popular automation tools in the open-source ecosystem, rivaling commercial platforms like Zapier and Make while offering complete data sovereignty.

What sets n8n apart is its fair-code license model — the source code is always visible and available, self-hosting is free, and the platform can be extended without limits. For teams that need workflow automation but cannot send data to third-party services due to compliance requirements (HIPAA, GDPR, SOC 2), n8n's self-hosted option is a game-changer.

The platform now includes native AI capabilities with built-in nodes for OpenAI, Anthropic, Google Gemini, and local model providers. Developers can build AI agent workflows that chain LLM calls with database queries, API integrations, and conditional logic — all through a visual canvas. This has made n8n a popular choice for building custom AI agents without writing complex orchestration code.

n8n connects to over 400 services out of the box, including databases (PostgreSQL, MySQL, MongoDB), APIs (Slack, GitHub, Jira, Salesforce), and file storage (S3, Google Drive). Custom integrations can be built using the HTTP Request node or by creating community nodes in TypeScript. The workflow engine supports error handling, retries, sub-workflows, and webhook triggers for event-driven automation.
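Those webhook triggers make a workflow callable over plain HTTP. A minimal sketch of firing one from Python, assuming a workflow with a Webhook node at a hypothetical path — the URL path and payload shape below are illustrative choices, not anything n8n prescribes:

```python
import json
from urllib import request

# Hypothetical webhook path; n8n serves whatever path is configured on the
# Webhook node, on port 5678 by default.
N8N_WEBHOOK_URL = "http://localhost:5678/webhook/deploy-notify"

def build_event(repo: str, status: str) -> dict:
    """Shape the JSON body the example workflow expects (illustrative)."""
    return {"repo": repo, "status": status}

def trigger_workflow(event: dict, url: str = N8N_WEBHOOK_URL) -> bytes:
    """POST the event to the workflow's webhook trigger; return the raw response."""
    req = request.Request(
        url,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read()
```

Calling `trigger_workflow(build_event("acme/api", "deployed"))` would start the workflow if an n8n instance is listening, and whatever nodes follow the Webhook node then run against that payload.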

181,174 stars

ollama/ollama

Ollama is an open-source platform written in Go that makes running large language models locally as straightforward as a single terminal command. Where tools like llama.cpp expose the raw inference engine, Ollama wraps the entire lifecycle -- model discovery, download, weight management, GPU acceleration, and serving -- into a polished developer experience. Running a model is as simple as typing `ollama run deepseek-v4` or `ollama run qwen3-coder`, and the system handles everything from pulling the right quantization for your hardware to allocating GPU memory and launching an API server. With over 164,000 GitHub stars and 14,700+ forks, Ollama has become the default way developers interact with open-source language models on their own machines.

The project builds on llama.cpp for its inference backend but adds critical infrastructure layers on top: a model registry with thousands of pre-packaged models, automatic hardware detection across NVIDIA CUDA, AMD ROCm, and Apple Metal backends, and a REST API server that runs on localhost:11434 by default. The API is compatible with both the OpenAI Chat Completions format and, as of v0.14.0, the Anthropic Messages API -- meaning tools like Claude Code, Codex, Droid, and OpenCode can connect directly to local Ollama instances without proxy layers.

The model library is one of Ollama's strongest differentiators. It provides ready-to-run versions of DeepSeek, Qwen, Gemma, Kimi-K2.5, GLM-5, MiniMax, gpt-oss, Mistral, LLaMA, Phi, and dozens more families across a range of parameter sizes and quantization levels. As of early 2026, the library supports over 40,000 model integrations. Specialized models like GLM-OCR for document understanding and Qwen3-VL for vision tasks are available alongside general-purpose chat and coding models.

The `ollama launch` command, introduced in v0.15, streamlines the setup of coding agents by automatically configuring environment variables and connecting your preferred development tool to a local or cloud-hosted model.

Ollama runs cross-platform on macOS, Linux, and Windows, with official Docker images for containerized deployments. Installation is a one-liner on every platform: a shell script on Linux, a DMG on macOS, or a PowerShell command on Windows. On Apple Silicon, Metal acceleration is automatic with no driver installation required -- the unified memory architecture means your full system RAM is available as GPU memory. On NVIDIA systems, CUDA drivers 535+ are detected automatically. AMD GPU support is available through ROCm 6.0+ on Linux.

Recent releases have added structured output support (constraining model responses to JSON schemas), a built-in web search API, NVFP4 and FP8 quantization for up to 35 percent faster token generation on supported hardware, and a redesigned desktop application with file drag-and-drop for document reasoning. The v0.17.6 release in March 2026 refined tool calling for Qwen 3.5 models and fixed GLM-OCR prompt rendering. The project also offers cloud-hosted inference for larger models like GLM-4.6 and Qwen3-coder-480B that exceed typical consumer hardware budgets.

Ollama's ecosystem integration is vast. Over 100 third-party projects connect to it, spanning web UIs (Open WebUI, LibreChat), desktop applications (AnythingLLM, Dify, Jan), orchestration frameworks (LangChain, LlamaIndex, Spring AI, Semantic Kernel), and automation platforms (n8n). Native client libraries are available in Python and JavaScript, with community libraries covering Go, Rust, Java, and more.
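Because the API server listens on localhost:11434, a local model can be queried with nothing but the standard library. A minimal sketch against Ollama's `/api/generate` endpoint — the model name is just an example; any model you have pulled works:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default API address

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks for one complete JSON response instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str, base_url: str = OLLAMA_URL) -> str:
    """Send a single generation request and return the model's text response."""
    req = request.Request(
        f"{base_url}/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a model pulled, `generate("qwen3-coder", "Write a hello-world in Go.")` returns the completion as a string. The same address also serves Ollama's OpenAI-compatible routes under `/v1`, which is what lets existing OpenAI client code point at a local instance.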

164,982 stars

langgenius/dify

Dify is an open-source platform for building production-grade AI applications through a visual drag-and-drop workflow builder. Instead of writing boilerplate code to chain LLM calls, manage prompts, and wire up retrieval pipelines, developers lay out their logic on a canvas -- connecting model nodes, tool calls, conditional branches, and human-in-the-loop checkpoints into executable workflows. The result is a system that can go from prototype to production deployment without rewriting the orchestration layer.

The platform supports hundreds of LLM providers out of the box: OpenAI GPT models, Anthropic Claude, Mistral, Llama 3, Qwen, and any provider exposing an OpenAI-compatible API. Switching between models is a dropdown change, not a code refactor. This provider-agnostic design means teams can start with a cloud API, benchmark against alternatives, and migrate to self-hosted models without touching their workflow logic.

Dify ships with a full RAG pipeline built in. Upload PDFs, presentations, or plain text, and the system handles chunking, embedding, vector storage, and retrieval. Version 1.12.0 introduced Summary Index, which attaches AI-generated summaries to document chunks so semantically related content clusters together during retrieval. Version 1.13.0 added multimodal retrieval that unifies text and images into a single semantic space for vision-enabled reasoning.

The agent capabilities layer supports both function-calling and ReAct-style reasoning with over 50 built-in tools including Google Search, DALL-E, Stable Diffusion, and WolframAlpha. Since v1.0.0, all models and tools have been migrated to a plugin architecture, so extending Dify with custom integrations no longer requires forking the core codebase.

On the operational side, Dify includes a Prompt IDE for comparing model outputs side-by-side, LLMOps-grade logging and performance monitoring, and a Backend-as-a-Service API layer that lets frontend applications consume workflows through REST endpoints. The v1.13.0 release added a Human Input node that pauses workflows for human review, enabling approval gates and content moderation loops directly within automated pipelines.

Deployment is flexible: Docker Compose for quick self-hosted setups (minimum 2 CPU cores, 4 GB RAM), Kubernetes via five community-maintained Helm charts, Terraform templates for Azure and Google Cloud, and AWS CDK for infrastructure-as-code deployments. Dify Cloud offers a managed option with 200 free GPT-4 calls to get started. Enterprise customers including Volvo Cars and Kakaku.com run Dify in production -- Kakaku.com reported 75% employee adoption with nearly 950 internal AI applications built on the platform.
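That Backend-as-a-Service layer means a frontend can drive a published Dify app over plain REST. A hedged sketch of calling the chat-messages endpoint in blocking mode — the base URL and app key are placeholders, and the field names follow Dify's commonly documented chat API:

```python
import json
from urllib import request

DIFY_API_BASE = "https://api.dify.ai/v1"  # or your self-hosted instance
DIFY_APP_KEY = "app-..."                  # per-app API key (placeholder)

def build_chat_request(query: str, user: str) -> dict:
    return {
        "inputs": {},                 # values for any input variables the app defines
        "query": query,               # the end-user's message
        "response_mode": "blocking",  # wait for the full answer instead of streaming
        "user": user,                 # stable ID so Dify can group a user's sessions
    }

def chat(query: str, user: str = "demo-user") -> str:
    """POST a chat message to the Dify app and return its answer text."""
    req = request.Request(
        f"{DIFY_API_BASE}/chat-messages",
        data=json.dumps(build_chat_request(query, user)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {DIFY_APP_KEY}",
        },
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["answer"]
```

The frontend never sees which model or workflow sits behind the app; swapping providers in the Dify console leaves this client code unchanged.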

132,700 stars

langchain-ai/langchain

LangChain is the most widely used open-source framework for building LLM-powered applications and autonomous agents. It provides a standardized, composable interface across model providers — OpenAI, Anthropic, Google, Mistral, and 50+ others — so you can swap models without rewriting your logic. With 129,000+ GitHub stars and over 277,000 dependent projects, it's the de facto standard for production RAG pipelines, multi-agent systems, and agentic workflows.
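The value of that standardized interface is that application code depends on a shared `invoke` contract rather than a vendor SDK. The provider-agnostic pattern LangChain implements can be sketched in plain Python — the classes below are illustrative stand-ins, not LangChain's real APIs:

```python
from dataclasses import dataclass
from typing import Protocol

class ChatModel(Protocol):
    """The shared contract: every provider exposes the same invoke()."""
    def invoke(self, prompt: str) -> str: ...

@dataclass
class FakeOpenAIChat:
    model: str
    def invoke(self, prompt: str) -> str:
        return f"[{self.model}] {prompt}"  # stand-in for a real API call

@dataclass
class FakeAnthropicChat:
    model: str
    def invoke(self, prompt: str) -> str:
        return f"[{self.model}] {prompt}"

def summarize(llm: ChatModel, text: str) -> str:
    # Application logic depends only on the interface, never on the provider.
    return llm.invoke(f"Summarize: {text}")

# Prints "[gpt-4o-mini] Summarize: LangChain abstracts providers."
print(summarize(FakeOpenAIChat("gpt-4o-mini"), "LangChain abstracts providers."))
```

Swapping `FakeOpenAIChat` for `FakeAnthropicChat` changes one constructor call while `summarize` stays untouched — the same property the framework provides across its 50+ real provider integrations.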

129,548 stars

open-webui/open-webui

Open WebUI is the most popular self-hosted AI chat platform on GitHub, with 127,000+ stars. It runs entirely offline and connects to Ollama, OpenAI-compatible APIs, and dozens of other LLM backends — giving you a ChatGPT-like experience without sending your data to any cloud.

127,076 stars
