
Why I Built a Shelly MCP Server (So I Could Migrate to Home Assistant)


Shelly Local MCP Server


Introduction

For my home automation I'm using Home Assistant because I don't want to be dependent on a single vendor or lock myself in. And for my lighting I had one requirement: the lights should be dumb, and the switches smart. So I installed Shellys around the house to control my lights. I set up the Shellys before my Home Assistant, and needed to migrate the schedules. So I looked around for a Shelly MCP server that I could use to migrate things over. And lo and behold: there was none that suited my needs. So I took the opportunity to create one myself. In this blog I will share my experience so far.

Don't want to wait? You can find it here: https://github.com/jdgoeij/shelly-mcp-server

Getting started

The migration itself is always the painful part. Devices are one thing. Device settings, naming, rooms, and behavior are another.

I wanted a way to bridge that process with MCP tooling so I could inspect and operate devices with natural language and in a controlled way while migrating.

First I installed the Home Assistant MCP so I could point my chat app at my Home Assistant instance. Then I loaded up VS Code and started prompting, because I had no idea how to start.

What I Built

The result is shelly-mcp-server.

It focuses on the practical stuff I needed during migration:

  • Discover Shelly Gen2+ devices on my LAN
  • Save and validate discovered devices
  • Control switches and covers
  • Run raw RPC calls when I need advanced commands

How I built it

First I added the Shelly API docs MCP to VS Code. Then I prompted:

Build a MCP server for my Shelly devices on my local network. Check the official documentation on which tools to include.

This gave me a ready-to-go MCP server quite quickly. After a bit of tuning and altering I had a working version in about an hour. However, I noticed that the device names I set in the Cloud portal weren't synced back to the devices. So now I had a list of 9 devices (it's not much, still working on more!) that had generic names. I decided to up my game and try to get that data from the cloud API. And oh boy, was that a mistake...

The rabbit hole

To use the cloud API I needed credentials, and I wanted an elegant way to handle them. Environment variables, a separate script, hardcoded values; I added the lot. I tested around a bit, decided I had a working MVP, and published it to npm (my first package ever!) as shelly-mcp-server@0.1.0.

The next day I noticed it wasn't working at all. The API gave me error after error. I started digging and found out the API returns the same generic error for any malformed payload. I loaded up cURL, fetched the data manually, and corrected Copilot. That worked. Then I started my journey of obtaining the device names I had set in the cloud, so I could hard-match my local data with the cloud data.

After an hour or two I still didn't have the device name/friendly name, and I had already:

  • Created a separate script that launched a login window to obtain a key when the user logged in -> easy login, yes sir.
  • Switched to OAuth to generate and use Bearer tokens -> even fancier way to login
  • Suppressed warnings and errors from the cloud enrichment
  • Reinstated the warnings and errors
  • Checked the data, updated devices, device config
  • Checked the developer tools in the browser to enumerate the badly documented Shelly API (or so I thought)

I was out of luck, and then it hit me: is the device name even returned by the cloud API? So I created a simple script: get the data from the local device and from the Shelly Cloud API and save it in separate files. Then I manually compared the files and there it was: the name wasn't there, and all I got was redundant data.

I sighed and just started clearing out the cloud enrichment feature, as it wasn't helping. And quite frankly, removing it made the entire MCP server a lot smaller and easier to maintain.

This afternoon I pushed v0.2.0 with local discovery only.

The manual part

Actually, the manual part was quite easy. I opened the Shelly Cloud Control Panel, and from there I opened the local web interface of each Shelly, went to settings, and renamed it. It took me five minutes at most. So much for the fancy 'cloud enrichment' I wanted to use. Lesson learned: rename your Shelly immediately when you adopt it and you'll have no issues.

How I migrated

With the MCP server running and connected to both my Shelly devices and Home Assistant, I could start the actual migration. In natural language, step by step, with Claude doing the heavy lifting.

Step 1: Discover and map devices

The first thing I did was run a full network scan to find all Shelly devices on my LAN. The server discovered 9 devices, validated connectivity, and saved them to devices.local.json. All reachable, zero issues.
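The exact layout of devices.local.json is an implementation detail of the server, but a saved entry boils down to the device's identity plus how to reach it. A hypothetical entry (field names are illustrative, not the package's actual schema) could look like:

```json
{
  "devices": [
    {
      "deviceId": "shellydimmerg3-dca4c9d412f0",
      "ip": "192.168.1.50",
      "generation": 3,
      "reachable": true
    }
  ]
}
```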

Step 2: Read all schedules

Every Shelly device can run local schedules: timers and sun-based triggers that fire independently of any cloud or hub. I had set these up before Home Assistant was in the picture.

My chat flow was a bit like this:

Which lights have schedules? I want to import these into Home Assistant.

It used Schedule.List via RPC and pulled all active jobs from each device. Seven out of nine returned clean results; two timed out, and for those I assumed the same pattern as the others and flagged it in the automation description.
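For reference, Gen2+ Shellys expose their RPC methods over plain HTTP at /rpc/&lt;Method&gt;, so pulling the schedules yourself is a single request per device. A minimal sketch (the IP address is a placeholder; the shape of the job objects is simplified):

```typescript
// Build the HTTP RPC endpoint for a Gen2+ Shelly device.
function rpcUrl(ip: string, method: string): string {
  return `http://${ip}/rpc/${method}`;
}

// Fetch all schedule jobs from one device. Schedule.List answers with
// an object containing a `jobs` array (enable flag, timespec, calls).
async function listSchedules(ip: string): Promise<unknown[]> {
  const res = await fetch(rpcUrl(ip, "Schedule.List"));
  if (!res.ok) throw new Error(`RPC failed: ${res.status}`);
  const body = (await res.json()) as { jobs?: unknown[] };
  return body.jobs ?? [];
}
```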

Next up:

Read the entity IDs from Home Assistant, plan your changes before applying. Use the friendly names so I know which devices you talk about.

I got a nice plan explaining which entity would get which schedule.

Step 3: Translate to Home Assistant automations

I triggered the 'migration' with a simple Execute!. Ten seconds later it was done. Rather than a one-to-one copy, the 23 individual Shelly schedules were consolidated into 8 automations. Claude asked me if I wanted to disable the schedules on the Shellys themselves, which I confirmed. Wow, natural language migrations are a breeze like this!

One thing though: The automations could be consolidated more. So I started tidying things up:

Combine the outside lights in a single automation
For the lights in the kitchen, dinner room and living room you have duplicate automations, combine these as well

This gave me the end result of 6 automations.
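To give an idea of the shape of the result, a consolidated automation like the outside-lights one ends up as ordinary Home Assistant YAML, roughly like this sketch (alias, entity IDs, and offset are made up for illustration, not my actual config):

```yaml
# Hypothetical consolidated automation: several per-device Shelly
# sunset schedules become one HA automation over multiple entities.
alias: Outside lights on at sunset
trigger:
  - platform: sun
    event: sunset
    offset: "-00:15:00"
action:
  - service: light.turn_on
    target:
      entity_id:
        - light.front_porch
        - light.back_garden
```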

The whole migration took about 10 minutes. What started as a tooling gap turned into a published npm package, a cleaner Home Assistant setup, and a few automations that would've taken me much longer to write by hand.

The next step is to automate my covers to follow the sun azimuth, retract the screens when it rains, and so on.

Extra: Matching Shelly devices to Home Assistant entities without names

One thing I wanted to figure out: can I match Shelly devices on the LAN to their corresponding Home Assistant entities without relying on the device name? Names are fragile. They can differ between the Shelly app, the cloud portal, and whatever you called the entity when you set it up in HA.

It turns out the answer is yes, and the key is the MAC address.

Every Shelly device has a deviceId in the format <type>-<mac>, for example shellydimmerg3-dca4c9d412f0. The MAC is everything after the last dash. Home Assistant uses the same identifier when it registers the device through the Shelly integration. For devices that have not been renamed in HA, the MAC shows up directly in the entity ID: light.shellydimmerg3_dca4c9d412f0. For devices that have been given a friendly name, the MAC still appears in the device_tracker entity that HA creates alongside it, for example device_tracker.outside_lamp_front_porch_shelly1pmminig4_dca4c9d412f0.

So the matching strategy is:

  • Take the Shelly deviceId, strip everything up to and including the last dash, and lowercase the result. That gives you the MAC.
  • Search HA entity IDs for that MAC string. If it appears directly in a light.* or switch.* entity, you have your match.
  • If not, fall back to device_tracker.* entities. The friendly name always contains <devicetype>-<mac>, so the MAC will be there even if the primary entity has been renamed.
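The matching strategy above fits in a few lines of TypeScript. A minimal sketch (the entity IDs in the usage below are examples, and the helper names are my own):

```typescript
// Extract the MAC from a Shelly deviceId like "shellydimmerg3-dca4c9d412f0":
// everything after the last dash, lowercased.
function macFromDeviceId(deviceId: string): string {
  return deviceId.slice(deviceId.lastIndexOf("-") + 1).toLowerCase();
}

// Match a Shelly device to a Home Assistant entity ID by MAC.
// Prefer light.*/switch.* entities; fall back to device_tracker.*,
// whose name always embeds <devicetype>_<mac> even after a rename.
function matchEntity(deviceId: string, entityIds: string[]): string | undefined {
  const mac = macFromDeviceId(deviceId);
  const hit = (prefixes: string[]) =>
    entityIds.find(
      (id) => prefixes.some((p) => id.startsWith(p)) && id.toLowerCase().includes(mac)
    );
  return hit(["light.", "switch."]) ?? hit(["device_tracker."]);
}
```

So matchEntity("shellydimmerg3-dca4c9d412f0", ids) would resolve directly to light.shellydimmerg3_dca4c9d412f0 when that entity exists, and otherwise fall through to the device_tracker entity carrying the same MAC.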

This covers all 9 of my devices without touching a single name. It also means that if someone renames a device in HA, the match still works. The MAC does not change.

For the MCP server, this opens up an interesting option: auto-discovery of the HA entity that corresponds to a Shelly device, purely based on network identity. No configuration file, no name mapping, no manual linking required.

So basically, I could've built the MCP server and migrated the Shellys within two hours.

Tips for building your own MCP server

If this story made you want to build your own MCP server, here are three things I learned the hard way.

1. Think carefully about what you actually need

I spent hours building cloud enrichment I never needed. Before you start coding, write down the tools you want and why. The Shelly API docs MCP helped a lot here because I could ask what was actually available before building anything. Start with the use case, not the feature list.

2. Keep it simple

The best thing I did was delete code. Removing the cloud enrichment made the server smaller, faster to understand, and easier to maintain. If a feature does not directly support your goal, leave it out. You can always add it later, and you probably won't need to.

3. Building an MCP server is not that hard anymore

With the right docs connected as an MCP source and a capable model doing the scaffolding, you go from zero to working server in about an hour. The SDK handles the protocol, Zod handles validation, and the model handles the boilerplate. The hard part is not the code. It is knowing what you want to build.

Try It

If you want to test it yourself:

You can run it standalone, or install it via npx in your MCP client config.
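The exact config block depends on your MCP client, but for clients that use the common mcpServers JSON format it is typically something along these lines (the "shelly" key name is your choice):

```json
{
  "mcpServers": {
    "shelly": {
      "command": "npx",
      "args": ["-y", "shelly-mcp-server"]
    }
  }
}
```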

If you have feature ideas, open an issue. I don't mind iterating while I continue my own Home Assistant migration.
