
Deploying Your 24/7 AI Assistant from Scratch: A Complete Practical Guide to OpenClaw

By Sarah Jenkins


The open-source project surpassed 150,000 GitHub stars in just two weeks. After plenty of tinkering, I finally have this AI assistant, one that can help you write code, reply to emails, and manage calendars, deployed and running.
Recently, the tech world has been swept up by a "little lobster": OpenClaw (formerly Clawdbot/Moltbot). One of the fastest-growing open-source projects in GitHub history, it differs from ChatGPT, which mainly just "talks." OpenClaw is an AI Agent that actually takes action: it can browse the web, execute commands, manage files, and even write code for you.
To be honest, the official documentation is not very friendly for users in China, and I ran into quite a few pitfalls during deployment. This article records the complete deployment process from 0 to 1, along with the Chinese-optimized materials I compiled, in hopes of helping you avoid unnecessary detours.
📚 Companion materials: I have organized the command collection and configuration file templates needed during deployment on my personal notes site fuye365.github.io, including practical content such as domestic mirror acceleration and API configuration guides.
Simply put, OpenClaw is a high-privilege AI agent running on your own server. Unlike SaaS-based AI services, it runs entirely under your control: your data stays on your machine, and its behavior can be customized end to end.
It is especially suitable for developers who need to automate repetitive work, such as regularly pulling data to generate reports, automatically replying to standardized inquiries, or remotely performing server maintenance.
Based on official requirements and my personal practical experience, I recommend preparing the following setup:
| Item | Minimum Requirement | Recommended Configuration | Notes |
|---|---|---|---|
| Operating System | Linux/macOS/Windows (WSL2) | Ubuntu 22.04 LTS | Preferred for cloud servers in China |
| Node.js | ≥ 22.x | 22.x LTS | Install via NVM (see below) |
| Memory | 2GB | 4GB+ | Swap must be configured if below 4GB |
| AI Model | Any API Key | Qwen / GLM | Domestic large-model providers offer generous free quotas |
Special reminder: Access to GitHub/npm from servers in China may be unstable. It is recommended to configure a proxy or use mirror sources in advance.
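As a concrete starting point, the two mirror settings I lean on most can be written as small config fragments. The npm registry URL below is the public npmmirror (Taobao) endpoint; the Docker mirror URL is a placeholder, so substitute the accelerator address from your own cloud provider:

```ini
# ~/.npmrc — route npm installs through the npmmirror registry
registry=https://registry.npmmirror.com
```

And for Docker, in `/etc/docker/daemon.json` (restart the Docker daemon afterwards):

```json
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}
```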
First, install Git and the Node.js environment. One pitfall to avoid: do not install Node.js directly with apt, since the packaged version is usually too old to meet the ≥ 22 requirement.
```bash
# Install NVM (users in China are advised to use the Gitee mirror)
curl -o- https://gitee.com/RubyMetric/nvm-cn/raw/main/install.sh | bash
source ~/.bashrc

# Install Node.js 22
nvm install 22
nvm use 22
node -v  # Confirm the output is v22.x.x
```
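Since several later steps fail silently on an old Node.js, it can be worth guarding your setup scripts with a quick major-version check. A minimal sketch, with the version string hard-coded here for illustration (in a real script, capture it with `node -v`):

```shell
#!/usr/bin/env sh
node_version="v22.11.0"    # in practice: node_version="$(node -v)"
major="${node_version#v}"  # strip the leading "v"
major="${major%%.*}"       # keep only the major component
if [ "$major" -ge 22 ]; then
  echo "Node.js is new enough: $node_version"
else
  echo "Node.js too old ($node_version); run: nvm install 22" >&2
  exit 1
fi
```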
An official one-click installation script is provided, but it may get interrupted under China's network conditions. If you run into issues, you can refer to the offline installation solution I compiled at the resource site mentioned at the end of this article.
```bash
# Official one-click installation (when the network is smooth)
curl -fsSL https://openclaw.ai/install.sh | bash

# Or install via npm (more stable)
npm install -g openclaw@latest
```
After installation, the first run will show an ASCII-art lobster and an interactive setup wizard:
```text
░████░█░░░░░█████░█░░░█░███░░████░░████░░▀█▀
🦞 FRESH DAILY 🦞
```
Run `openclaw onboard --install-daemon` to enter setup. The key steps are as follows:
1. Security confirmation
The wizard will clearly warn you about the risks: the Agent can execute commands and read/write files. Enter yes to continue.
2. Choose the AI backend
For users in China, I strongly recommend selecting Qwen or GLM, since both are reachable domestically and offer generous free quotas.
3. Configure the messaging channel
4. Install the daemon
The `--install-daemon` flag automatically creates a system service so that OpenClaw starts on boot.
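For reference, the service created by the daemon install looks roughly like the following systemd unit. This is an illustrative sketch, not the generated file: the unit name, `ExecStart` command, and paths are assumptions, so check `systemctl list-units | grep -i claw` to find the actual name on your machine:

```ini
# /etc/systemd/system/openclaw.service (illustrative sketch)
[Unit]
Description=OpenClaw agent daemon
After=network-online.target

[Service]
ExecStart=/usr/bin/env openclaw gateway
Restart=on-failure
User=youruser

[Install]
WantedBy=multi-user.target
```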
After the basic deployment is complete, three key configurations are still needed before it can truly be put into production.
OpenClaw keeps its long-term memory in local Markdown files, while its behavior is controlled by a JSON config. To enable persistent memory, edit `~/.openclaw/openclaw.json`:
```json
{
  "agent": {
    "memory": {
      "enabled": true,
      "storagePath": "~/.openclaw/memory"
    }
  }
}
```
By default, the Agent has relatively high privileges. It is recommended to enable Docker sandbox isolation:
```json
{
  "agents": {
    "defaults": {
      "sandbox": {
        "mode": "non-main",
        "docker": {
          "image": "openclaw-sandbox:bookworm-slim",
          "network": "none"
        }
      }
    }
  }
}
```
Make sure the server allows port 18789 (the default Gateway port):
```bash
sudo ufw allow 18789/tcp
# Or open the port in your cloud provider's security group
```
Because of network environment differences, deployment in China requires a few “practical workarounds”:
Mirror acceleration: npm and Docker pulls often time out, so it is recommended to use domestic sources such as the Taobao NPM mirror and Alibaba Cloud Docker mirror.
Model selection: In my testing, Qwen2.5-coder performed very well on coding tasks, and Alibaba Cloud provides a generous free quota. Example configuration:
```json
{
  "agent": {
    "model": "qwen2.5-coder:32b",
    "providers": {
      "qwen": {
        "apiKey": "sk-your-key",
        "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1"
      }
    }
  }
}
```
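A malformed `openclaw.json` is an easy way to break the daemon, so I validate the file before restarting. A minimal, self-contained sketch (it writes a sample config to a temp file so it runs anywhere; in practice, point `cfg` at `~/.openclaw/openclaw.json`):

```shell
#!/usr/bin/env sh
cfg="$(mktemp)"  # stand-in for ~/.openclaw/openclaw.json
cat > "$cfg" <<'EOF'
{ "agent": { "model": "qwen2.5-coder:32b" } }
EOF
# python3 -m json.tool exits non-zero on invalid JSON
if python3 -m json.tool "$cfg" > /dev/null 2>&1; then
  result="valid JSON"
else
  result="invalid JSON"
fi
echo "$result"
rm -f "$cfg"
```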
Mobile access: After deployment, integrating with Feishu or DingTalk allows you to remotely control your home or office computer from your phone, which is currently the most suitable usage pattern for users in China.
| Symptom | Cause | Solution |
|---|---|---|
| Installation script hangs | Network is unavailable | Switch to npm installation, or refer to the offline package solution at the resource site |
| Gateway fails to start | Port is occupied | Modify the gateway port configuration in openclaw.json |
| AI does not respond | Invalid API Key | Check whether the quota has been exhausted, or whether baseUrl is configured incorrectly |
| Chinese text displays as garbled characters | Terminal encoding issue | Export a UTF-8 locale, e.g. `export LANG=en_US.UTF-8` |
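For the "port is occupied" case in the table above, the fix is a one-line config change. The exact key name may differ between versions, so treat this as a sketch and confirm against the config on your install:

```json
{
  "gateway": {
    "port": 18790
  }
}
```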
OpenClaw represents a turning point where AI Agents move from being "toys" to becoming real "tools." Although deployment is more involved than simply opening ChatGPT, what you get in return is complete data ownership and unlimited customization capability.
If you run into issues during deployment that are not mentioned in the documentation, feel free to discuss them. I have compiled the configuration file templates, mirror acceleration URLs, and step-by-step screenshots for creating bots on each platform for convenient reference at any time.
After all, configure it once, and benefit for a long time. When your AI assistant starts automatically handling emails at 3 a.m., you will thank yourself for taking the time to tinker with it now.
About the author

Sarah Jenkins is a seasoned OpenClaw developer with a strong focus on optimizing high-performance computing solutions. Her work primarily involves crafting efficient parallel algorithms and enhancing GPU acceleration for complex scientific simulations. Jenkins is renowned for her meticulous attention to detail and her ability to translate intricate theoretical concepts into practical, robust OpenClaw implementations.