AI for Developers
Top coding tools, MCP servers, open-source repos, and tutorials curated for software engineers.
Top Coding Tools
Cursor
The AI-first code editor built for pair programming with agents
Claude
The AI assistant built for serious thinking, coding, and complex work
Windsurf
The agentic IDE that keeps developers in flow with deep codebase understanding and autonomous multi-file editing.
Devin
The AI that ships PRs while you sleep — and 67% of them actually get merged
Replit
The cloud IDE where AI Agent 3 autonomously builds, tests, and deploys full-stack apps from plain English
Cline
5 million developers installed this free Cursor alternative — and their API bills still come in under $20/month
Latest Developer Articles
Your Next Raise Isn't Cash: Nvidia Wants to Pay You $250K in AI Tokens (Here's the Math That Doesn't Add Up)
Jensen Huang says a $500K engineer should burn $250K in AI tokens yearly. Nvidia calls it the 'fourth pillar' of tech comp. But tokens don't vest, don't appreciate, and can't pay rent. Here's the real dollar math.
Anthropic Accidentally Leaked Its Most Powerful Model. 3,000 Files Exposed the AI That Terrifies Cybersecurity Experts.
A CMS misconfiguration exposed 3,000 Anthropic files, including details on Claude Mythos — a new model tier above Opus with cybersecurity capabilities that the company itself calls 'far ahead of any other AI model.' Cybersecurity stocks dropped within hours.
Apple Just Ended ChatGPT's Siri Monopoly — iOS 27 Opens the Door to Claude, Gemini, and 5 More AI Chatbots
iOS 27 introduces Siri Extensions letting users swap ChatGPT for Claude, Gemini, or 5 other AIs. Apple's real play? Collecting 30% on every AI subscription across 2 billion devices.
ARC-AGI-3 Just Broke Every Frontier Model. Humans Score 100%. GPT-5.4 Scores 0.26%.
ARC-AGI-3 is the first interactive reasoning benchmark where humans score 100% and the best AI — Gemini 3.1 Pro — scores 0.37%. The largest human-AI gap in any mainstream benchmark reveals that scaling alone won't reach AGI.
Top Repos
openclaw/openclaw
Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
310,753 stars
n8n-io/n8n
n8n is an open-source workflow automation platform that combines a visual, no-code interface with the flexibility of custom JavaScript and Python code execution. With over 150,000 GitHub stars as of March 2026, it has become one of the most popular automation tools in the open-source ecosystem, rivaling commercial platforms like Zapier and Make while offering complete data sovereignty.

What sets n8n apart is its fair-code license model: the source code is always visible and available, self-hosting is free, and the platform can be extended without limits. For teams that need workflow automation but cannot send data to third-party services due to compliance requirements (HIPAA, GDPR, SOC 2), n8n's self-hosted option is a game-changer.

The platform now includes native AI capabilities with built-in nodes for OpenAI, Anthropic, Google Gemini, and local model providers. Developers can build AI agent workflows that chain LLM calls with database queries, API integrations, and conditional logic, all through a visual canvas. This has made n8n a popular choice for building custom AI agents without writing complex orchestration code.

n8n connects to over 400 services out of the box, including databases (PostgreSQL, MySQL, MongoDB), APIs (Slack, GitHub, Jira, Salesforce), and file storage (S3, Google Drive). Custom integrations can be built using the HTTP Request node or by creating community nodes in TypeScript. The workflow engine supports error handling, retries, sub-workflows, and webhook triggers for event-driven automation.
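The webhook triggers mentioned above make an n8n workflow callable like any HTTP endpoint. As a minimal sketch, assuming a self-hosted instance on its default port 5678 — the webhook path and payload fields here are hypothetical placeholders you would configure on your own Webhook trigger node, not part of n8n itself:

```python
import json
import urllib.request

# Hypothetical webhook URL: the path segment is whatever you configure
# on the Webhook trigger node in your own n8n workflow.
N8N_WEBHOOK_URL = "http://localhost:5678/webhook/new-signup"

def build_event(email: str, plan: str) -> dict:
    """Assemble the JSON payload the workflow will receive as input."""
    return {"email": email, "plan": plan, "source": "landing-page"}

def trigger_workflow(event: dict) -> int:
    """POST the event to the webhook and return the HTTP status code."""
    req = urllib.request.Request(
        N8N_WEBHOOK_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (requires a running n8n instance with a matching webhook workflow):
#   trigger_workflow(build_event("dev@example.com", "pro"))
```

From the workflow's perspective, the posted JSON arrives as the trigger node's output and flows into whatever downstream nodes you have wired up.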
181,174 stars
ollama/ollama
Ollama is an open-source platform written in Go that makes running large language models locally as straightforward as a single terminal command. Where tools like llama.cpp expose the raw inference engine, Ollama wraps the entire lifecycle — model discovery, download, weight management, GPU acceleration, and serving — into a polished developer experience. Running a model is as simple as typing `ollama run deepseek-v4` or `ollama run qwen3-coder`, and the system handles everything from pulling the right quantization for your hardware to allocating GPU memory and launching an API server. With over 164,000 GitHub stars and 14,700+ forks, Ollama has become the default way developers interact with open-source language models on their own machines.

The project builds on llama.cpp for its inference backend but adds critical infrastructure layers on top: a model registry with thousands of pre-packaged models, automatic hardware detection across NVIDIA CUDA, AMD ROCm, and Apple Metal backends, and a REST API server that runs on localhost:11434 by default. The API is compatible with both the OpenAI Chat Completions format and, as of v0.14.0, the Anthropic Messages API, meaning tools like Claude Code, Codex, Droid, and OpenCode can connect directly to local Ollama instances without proxy layers.

The model library is one of Ollama's strongest differentiators. It provides ready-to-run versions of DeepSeek, Qwen, Gemma, Kimi-K2.5, GLM-5, MiniMax, gpt-oss, Mistral, LLaMA, Phi, and dozens more families across a range of parameter sizes and quantization levels. As of early 2026, the library supports over 40,000 model integrations. Specialized models like GLM-OCR for document understanding and Qwen3-VL for vision tasks are available alongside general-purpose chat and coding models.
The `ollama launch` command, introduced in v0.15, streamlines the setup of coding agents by automatically configuring environment variables and connecting your preferred development tool to a local or cloud-hosted model.

Ollama runs cross-platform on macOS, Linux, and Windows, with official Docker images for containerized deployments. Installation is a one-liner on every platform: a shell script on Linux, a DMG on macOS, or a PowerShell command on Windows. On Apple Silicon, Metal acceleration is automatic with no driver installation required; the unified memory architecture means your full system RAM is available as GPU memory. On NVIDIA systems, CUDA drivers 535+ are detected automatically. AMD GPU support is available through ROCm 6.0+ on Linux.

Recent releases have added structured output support (constraining model responses to JSON schemas), a built-in web search API, NVFP4 and FP8 quantization for up to 35 percent faster token generation on supported hardware, and a redesigned desktop application with file drag-and-drop for document reasoning. The v0.17.6 release in March 2026 refined tool calling for Qwen 3.5 models and fixed GLM-OCR prompt rendering. The project also offers cloud-hosted inference for larger models like GLM-4.6 and Qwen3-coder-480B that exceed typical consumer hardware budgets.

Ollama's ecosystem integration is vast. Over 100 third-party projects connect to it, spanning web UIs (Open WebUI, LibreChat), desktop applications (AnythingLLM, Dify, Jan), orchestration frameworks (LangChain, LlamaIndex, Spring AI, Semantic Kernel), and automation platforms (n8n). Native client libraries are available in Python and JavaScript, with community libraries covering Go, Rust, Java, and more.
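The localhost API described above can be exercised with nothing but the standard library. A minimal sketch against Ollama's native `/api/generate` endpoint on the default port 11434 — the model name is a placeholder for whatever you have pulled locally:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for the local Ollama server."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate(model: str, prompt: str) -> str:
    """Send the prompt and return the model's full response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama instance and a pulled model):
#   generate("qwen3-coder", "Write a binary search in Go.")
```

Because the server also speaks the OpenAI Chat Completions format, the same instance can be targeted by any OpenAI client library pointed at the local base URL instead of a cloud endpoint.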
164,982 stars
langgenius/dify
Dify is an open-source platform for building production-grade AI applications through a visual drag-and-drop workflow builder. Instead of writing boilerplate code to chain LLM calls, manage prompts, and wire up retrieval pipelines, developers lay out their logic on a canvas, connecting model nodes, tool calls, conditional branches, and human-in-the-loop checkpoints into executable workflows. The result is a system that can go from prototype to production deployment without rewriting the orchestration layer.

The platform supports hundreds of LLM providers out of the box: OpenAI GPT models, Anthropic Claude, Mistral, Llama 3, Qwen, and any provider exposing an OpenAI-compatible API. Switching between models is a dropdown change, not a code refactor. This provider-agnostic design means teams can start with a cloud API, benchmark against alternatives, and migrate to self-hosted models without touching their workflow logic.

Dify ships with a full RAG pipeline built in. Upload PDFs, presentations, or plain text, and the system handles chunking, embedding, vector storage, and retrieval. Version 1.12.0 introduced Summary Index, which attaches AI-generated summaries to document chunks so semantically related content clusters together during retrieval. Version 1.13.0 added multimodal retrieval that unifies text and images into a single semantic space for vision-enabled reasoning.

The agent capabilities layer supports both function-calling and ReAct-style reasoning with over 50 built-in tools including Google Search, DALL-E, Stable Diffusion, and WolframAlpha. Since v1.0.0, all models and tools have been migrated to a plugin architecture, so extending Dify with custom integrations no longer requires forking the core codebase.
On the operational side, Dify includes a Prompt IDE for comparing model outputs side by side, LLMOps-grade logging and performance monitoring, and a Backend-as-a-Service API layer that lets frontend applications consume workflows through REST endpoints. The v1.13.0 release added a Human Input node that pauses workflows for human review, enabling approval gates and content moderation loops directly within automated pipelines.

Deployment is flexible: Docker Compose for quick self-hosted setups (minimum 2 CPU cores, 4 GB RAM), Kubernetes via five community-maintained Helm charts, Terraform templates for Azure and Google Cloud, and AWS CDK for infrastructure-as-code deployments. Dify Cloud offers a managed option with 200 free GPT-4 calls to get started. Enterprise customers including Volvo Cars and Kakaku.com run Dify in production; Kakaku.com reported 75% employee adoption with nearly 950 internal AI applications built on the platform.
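The Backend-as-a-Service layer means a finished workflow is consumed like any REST API. A sketch under stated assumptions: the `/v1/chat-messages` path and Bearer-token auth follow Dify's service API conventions, while the base URL and API key below are placeholders for your own instance and app:

```python
import json
import urllib.request

DIFY_BASE_URL = "http://localhost/v1"  # placeholder: your Dify instance
DIFY_API_KEY = "app-xxxxxxxx"          # placeholder: the per-app API key

def build_chat_request(query: str, user: str, conversation_id: str = "") -> urllib.request.Request:
    """Build a blocking (non-streaming) chat-message request against a Dify app."""
    body = {
        "inputs": {},                    # values for any input variables the app defines
        "query": query,
        "user": user,                    # stable end-user identifier for session tracking
        "conversation_id": conversation_id,
        "response_mode": "blocking",
    }
    return urllib.request.Request(
        f"{DIFY_BASE_URL}/chat-messages",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {DIFY_API_KEY}",
        },
        method="POST",
    )

def ask(query: str, user: str = "demo-user") -> str:
    """Send a query and return the answer text from the workflow's response."""
    with urllib.request.urlopen(build_chat_request(query, user)) as resp:
        return json.loads(resp.read())["answer"]

# Usage (requires a running Dify instance and a published app):
#   ask("Summarize our refund policy.")
```

Because the workflow logic lives behind this endpoint, the frontend never changes when you rewire nodes or swap the underlying model on the canvas.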
132,700 stars
langchain-ai/langchain
LangChain is the most widely used open-source framework for building LLM-powered applications and autonomous agents. It provides a standardized, composable interface across model providers — OpenAI, Anthropic, Google, Mistral, and 50+ others — so you can swap models without rewriting your logic. With 130,000+ GitHub stars and over 277,000 dependent projects, it's the de facto standard for production RAG pipelines, multi-agent systems, and agentic workflows.
129,548 stars
open-webui/open-webui
Open WebUI is the most popular self-hosted AI chat platform on GitHub, with 127,000+ stars. It runs entirely offline and connects to Ollama, OpenAI-compatible APIs, and dozens of other LLM backends — giving you a ChatGPT-like experience without sending your data to any cloud.
127,076 stars
Developer Courses
Advanced MCP Server Development — Build Production-Grade AI Tool Infrastructure
MCP servers are the new API economy. Every AI agent needs tools, and every tool needs an MCP server. This 7-lesson advanced course takes you from understanding the protocol internals to building, testing, securing, and deploying production MCP servers. You'll build a complete MCP server from scratch, implement OAuth authentication, write comprehensive tests, and deploy to multiple hosting platforms. By the end, you'll know how to build and sell MCP servers on the emerging marketplace ecosystem.
advanced
AI Agents: From Concept to Production — Build What 95% of Developers Only Talk About
Go from zero to deploying production AI agents in 8 hands-on lessons. Master LangChain, LangGraph, and CrewAI with real Python code — tool calling, memory systems, multi-agent orchestration, guardrails, and cost control. Not theory. Working agents you can ship this week.
intermediate
Prompt Engineering That Actually Works — 8 Techniques That Turn Any AI Into Your Expert Assistant
Stop getting mediocre AI responses. Learn the 8 prompt engineering techniques that separate power users from everyone else — with real, copy-paste examples you can use today with ChatGPT, Claude, Gemini, and any LLM.
beginner