Dify is an open-source LLM application development platform. Its intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
Dify takes a visual orchestration approach. Instead of hand-writing chains in a framework such as LangChain, developers drag and drop nodes to create complex AI workflows. It integrates seamlessly with leading model providers, including OpenAI, Anthropic, and Google Gemini, as well as local models via Ollama.
The easiest and officially recommended way to deploy Dify for production is using Docker Compose.
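A typical self-hosted setup, following the steps in the upstream langgenius/dify repository and keeping the default .env values, looks like this:

```shell
# Clone the repository and enter the Docker directory
git clone https://github.com/langgenius/dify.git
cd dify/docker

# Copy the example environment file and adjust values as needed
cp .env.example .env

# Start all services (API, worker, web, database, Redis, etc.) in the background
docker compose up -d

# Verify the containers are running
docker compose ps
```

Once the containers are healthy, the web console is served on port 80 of the host by default; open it in a browser to complete the initial admin setup.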
Dify releases updates frequently. To get the latest features and security patches, follow these standard Docker update procedures.
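A sketch of a standard upgrade cycle, assuming the Docker Compose deployment above and an unmodified checkout tracking the main branch:

```shell
cd dify/docker

# Stop the running stack
docker compose down

# Fetch the latest source (pin a release tag instead of main for reproducible upgrades)
git pull origin main

# Pull updated images and restart
docker compose pull
docker compose up -d
```

After upgrading, compare your .env against the new .env.example; releases may introduce new configuration variables that need to be carried over.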
Dify supports dozens of model providers globally. You can call hosted APIs or deploy local models.
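For local models, one common path is Ollama. A minimal sketch, assuming Ollama is installed on the host and Dify runs in Docker on the same machine:

```shell
# Pull a local model with Ollama (its API listens on localhost:11434 by default)
ollama pull llama3

# Quick sanity check from the command line
ollama run llama3 "Say hello"

# Then, in the Dify console: Settings -> Model Provider -> Ollama.
# From inside Dify's Docker containers, "localhost" refers to the container
# itself, not the host, so point the base URL at the host gateway:
#   http://host.docker.internal:11434
```

The host.docker.internal address works on Docker Desktop; on plain Linux you may need to use the host's bridge IP or add an extra_hosts entry instead.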
Retrieval-Augmented Generation (RAG) allows your LLM to answer questions based on your private datasets (PDFs, Markdown, Notion, websites).
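Datasets can also be managed programmatically. A sketch of creating an empty knowledge base over the Knowledge API, assuming a self-hosted instance on localhost; the API key shown is a placeholder:

```shell
# Knowledge API keys are issued separately from app keys in the Dify console
DATASET_API_KEY="dataset-xxxxxxxx"   # hypothetical placeholder

# Create an empty knowledge base (dataset) named "product-docs"
curl -X POST 'http://localhost/v1/datasets' \
  -H "Authorization: Bearer ${DATASET_API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{"name": "product-docs"}'
```

Documents (PDFs, Markdown, and so on) can then be uploaded into the dataset via the console or the corresponding document-creation endpoints.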
Workflows enable you to build complex multi-step AI agents. Below are the primary nodes you can drag and drop onto the canvas.
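A published workflow can also be triggered over HTTP rather than from the canvas. A sketch, assuming an app API key and a workflow whose Start node defines an input variable named query (both are assumptions for illustration):

```shell
API_KEY="app-xxxxxxxx"   # hypothetical placeholder for a Workflow app key

curl -X POST 'http://localhost/v1/workflows/run' \
  -H "Authorization: Bearer ${API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{
    "inputs": {"query": "Summarize our refund policy"},
    "response_mode": "blocking",
    "user": "user-123"
  }'
```

With response_mode set to "blocking" the call returns once the whole workflow finishes; "streaming" emits intermediate events as each node completes.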
Equip your Agents with real-world tools. Dify supports seamless integration with various third-party services.
Dify provides "Backend-as-a-Service" API capabilities. To secure your endpoints, every request must present an app API key as a Bearer token in the Authorization header.
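A minimal sketch of the authentication pattern; the key shown is a placeholder generated per app in the Dify console:

```shell
API_KEY="app-xxxxxxxx"   # hypothetical placeholder

# Requests without this header are rejected with 401 Unauthorized
curl 'http://localhost/v1/parameters' \
  -H "Authorization: Bearer ${API_KEY}"
```

Treat the key like a password: keep it server-side and never embed it in client-side or mobile code.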
Send a message to the chatbot application and receive a response.
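A sketch of a chat request against the chat-messages endpoint, assuming a self-hosted instance on localhost and a placeholder app key:

```shell
API_KEY="app-xxxxxxxx"   # hypothetical placeholder

curl -X POST 'http://localhost/v1/chat-messages' \
  -H "Authorization: Bearer ${API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{
    "inputs": {},
    "query": "What can you help me with?",
    "response_mode": "streaming",
    "user": "user-123",
    "conversation_id": ""
  }'
```

Leaving conversation_id empty starts a new conversation; pass the id returned in the response to continue the same thread on subsequent calls.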
Common errors and how to resolve them.