Recommendarr is a web application that generates personalized TV show and movie recommendations based on your Sonarr, Radarr, Plex, and Jellyfin libraries using AI.
- AI-Powered Recommendations: Get personalized TV show and movie suggestions based on your existing library
- Sonarr & Radarr Integration: Connects directly to your media servers to analyze your TV and movie collections
- Plex & Jellyfin Integration: Analyzes your watch history to provide better recommendations based on what you’ve actually watched
- Flexible AI Support: Works with OpenAI, local models (Ollama/LM Studio), or any OpenAI-compatible API
- Customization Options: Adjust recommendation count, model parameters, and more
- Dark/Light Mode: Toggle between themes based on your preference
- Poster Images: Displays media posters with fallback generation
- Sonarr instance with API access (for TV recommendations)
- Radarr instance with API access (for movie recommendations)
- Plex or Jellyfin instance with API access (for watch history analysis) – optional
- An OpenAI API key or any OpenAI-compatible API (like local LLM servers)
- Node.js (v14+) and npm for development
Using our pre-built Docker image is the quickest way to get started:
```bash
# Pull the image
docker pull tannermiddleton/recommendarr:latest

# Run the container
docker run -d --name recommendarr -p 3030:80 tannermiddleton/recommendarr:latest
```
Then visit http://localhost:3030 in your browser.
For more Docker options, see the Docker Support section below.
- Clone the repository:

```bash
git clone https://github.com/fingerthief/recommendarr.git
cd recommendarr
```

- Install dependencies:

```bash
npm install
```

- Run the development server:

```bash
npm run serve
```

- Visit http://localhost:3030 in your browser.
- When you first open Recommendarr, you'll be prompted to connect to your services:
- For Sonarr (TV shows):
  - Enter your Sonarr URL (e.g., http://localhost:8989 or https://sonarr.yourdomain.com)
  - Enter your Sonarr API key (found in Sonarr under Settings → General)
  - Click "Connect"
- For Radarr (Movies):
  - Enter your Radarr URL (e.g., http://localhost:7878 or https://radarr.yourdomain.com)
  - Enter your Radarr API key (found in Radarr under Settings → General)
  - Click "Connect"
- For Plex (Optional – Watch History):
  - Enter your Plex URL (e.g., http://localhost:32400 or https://plex.yourdomain.com)
  - Enter your Plex token (can be found by following Plex's official instructions)
  - Click "Connect"
- For Jellyfin (Optional – Watch History):
  - Enter your Jellyfin URL (e.g., http://localhost:8096 or https://jellyfin.yourdomain.com)
  - Enter your Jellyfin API key (found in Jellyfin under Dashboard → API Keys)
  - Enter your Jellyfin user ID (found in Jellyfin user settings)
  - Click "Connect"
You can connect to any combination of these services based on your needs.
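The "Connect" step above boils down to an authenticated status call. As a minimal sketch (the URL and key are placeholders, and the exact check Recommendarr performs may differ), Sonarr and Radarr both expose a v3 status endpoint authenticated with an `X-Api-Key` header:

```python
# Sketch of a Sonarr/Radarr connection check: a GET against the v3 status
# endpoint, authenticated via the X-Api-Key header. Placeholder URL and key.
import urllib.request

def build_status_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated GET request for a *arr status endpoint."""
    url = base_url.rstrip("/") + "/api/v3/system/status"
    return urllib.request.Request(url, headers={"X-Api-Key": api_key})

req = build_status_request("http://localhost:8989", "your-sonarr-api-key")
print(req.full_url)  # -> http://localhost:8989/api/v3/system/status
# urllib.request.urlopen(req) would return JSON describing the server on success.
```

A 200 response confirms both the URL and the API key are valid; a 401 means the key is wrong.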
- Navigate to Settings
- Select the AI Service tab
- Enter your AI service details:
  - API URL: For OpenAI, use https://api.openai.com/v1. For local models, use your server URL (e.g., http://localhost:1234/v1)
  - API Key: Your OpenAI API key or the appropriate key for other services (not needed for some local servers)
  - Model: Select a model from the list or leave as default
  - Parameters: Adjust max tokens and temperature as needed
- Click “Save Settings”
- Navigate to TV Recommendations or Movie Recommendations page
- Adjust the number of recommendations you’d like to receive using the slider
- If connected to Plex or Jellyfin, choose whether to include your watch history in the recommendations
- Click “Get Recommendations”
- View your personalized media suggestions with posters and descriptions
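Under the hood, a flow like this amounts to folding your library (and optionally your watch history) into a prompt. The sketch below is purely illustrative; the function name and prompt wording are assumptions, not Recommendarr's actual prompt:

```python
# Hypothetical prompt assembly: library titles plus optional watch history
# become a single request for the LLM. Wording is illustrative only.
def build_recommendation_prompt(library, watch_history=None, count=5):
    """Turn library titles (and optionally watched titles) into an LLM prompt."""
    lines = [
        f"Recommend {count} TV shows I don't already have.",
        "My library: " + ", ".join(sorted(library)) + ".",
    ]
    if watch_history:
        lines.append("Recently watched: " + ", ".join(watch_history) + ".")
    return "\n".join(lines)

prompt = build_recommendation_prompt(
    {"Severance", "Dark", "The Expanse"},
    watch_history=["Dark"],
    count=3,
)
print(prompt)
```

This is also why including watch history tends to help: the model sees not just what you own, but what you actually finished.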
The easiest way to run Recommendarr:
```bash
# Pull the image
docker pull tannermiddleton/recommendarr:latest

# Run the container
docker run -d --name recommendarr -p 3030:80 tannermiddleton/recommendarr:latest
```
If you want to build the Docker image yourself:
```bash
# Clone the repository
git clone https://github.com/fingerthief/recommendarr.git

# Navigate to the project directory
cd recommendarr

# Build the Docker image
docker build -t recommendarr:local .

# Run the container
docker run -d --name recommendarr -p 3030:80 recommendarr:local
```
The repository includes a docker-compose.yml file. Simply run:
```bash
# Clone the repository
git clone https://github.com/fingerthief/recommendarr.git

# Navigate to the project directory
cd recommendarr

# Start with docker-compose
docker-compose up -d
```
This will build the image from the local Dockerfile and start the service on port 3030.
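The repository's own docker-compose.yml is authoritative, but a compose file matching the behavior described above would look roughly like this (the service name and restart policy are assumptions; the build context and port mapping come from the text):

```yaml
# Hypothetical docker-compose.yml; defer to the file shipped in the repository.
services:
  recommendarr:
    build: .                    # build from the local Dockerfile
    container_name: recommendarr
    ports:
      - "3030:80"               # serve the app on http://localhost:3030
    restart: unless-stopped
```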
Recommendarr works with various AI services:
- OpenAI API: Standard integration with models like GPT-3.5 and GPT-4
- Ollama: Self-hosted models with OpenAI-compatible API
- LM Studio: Run models locally on your computer
- Anthropic Claude: Via OpenAI-compatible endpoints
- Self-hosted models: Any service with OpenAI-compatible chat completions API
Here are some recommendations for models that work well with Recommendarr:
- Meta Llama 3.3 70B Instruct: Great performance for free
- Gemini 2.0 models (Flash/Pro/Thinking): Excellent recommendation quality
- DeepSeek R1 models: Strong performance across variants
- Claude 3.7/3.5 Haiku: Exceptional for understanding your library preferences
- GPT-4o mini: Excellent balance of performance and cost
- Grok Beta: Good recommendations
11 Comments
freedomben
Looks super neat! A great idea as well.
Any plans to support jellyfin?
nickthegreek
Trakt.tv is the integration you need.
What is the largest library of watched media this has been tested with? I can see this choking on media fanatics' watch histories.
richjdsmith
This is really cool, and very well done! Would love to see it more on a per-user basis, as I share access with my family and do not have similar tastes at all. Perhaps tied in with the Overseerr API and Tautulli to see what users are requesting, then actually watching?
CharlesW
I'm excited to try this! I'd love support for music recommendations via Plex music libraries. (Currently, I use a script to export my music library to a format suitable for LLM analysis.)
phito
This sounds amazing, giving it a try right now
monkaiju
I'd love to see support for lidarr, I need way more help with music recommendations than TV/Movies
Nelkins
Cool project! Can you explain a little more about how the recommendation algorithm works?
hi_hi
Has there been any research on how LLMs perform as recommendation engines?
I'd assume there aren't any algorithms providing weighted comparisons based on my viewing habits, but rather a fairly random list that looks like it's based on my viewing habits.
Perhaps, in practice, the difference between those two is academic, but I'm really not keen on leveraging such a heavy everything model for such a specific use case, when something much simpler, and private, would suffice.
silvanocerza
Cool project but why use an LLM for this?
Recommendation systems existed well before LLMs and have been in use for a while; wouldn't that be better and more efficient, even?
palakkadan
Just integrate Trakt
m0wer
Probably what you want is not an LLM but just the embeddings for clustering. It's much lighter and would work well with new material as well.
I've tested it out for filtering RSS feeds and it has worked pretty well [1].
[1] https://github.com/m0wer/rssfilter