FAQ
- System
- Domain Name
- Cloudflare
- Cloud Storage
- Cloudbox Backup and Restore
- Cloudbox Install
- Ansible Tags
- Error while fetching server API version
- 403 Client Error: Forbidden: endpoint with name <container name> already exists in network <network name>
- 500 Server Error: Internal Server Error: driver failed programming external connectivity on endpoint <container name> bind for 0.0.0.0:<port number> failed: port is already allocated
- Updating Cloudbox
- Docker
- Nginx Proxy
- Rclone
- Remote
- Plex
- Plex Autoscan
- Newly downloaded media from Sonarr and Radarr are not being added to Plex?
- Plex Autoscan log shows error during empty trash request
- Plex Autoscan error with metadata item id
- Purpose of a Control File in Plex Autoscan
- Plex Autoscan Localhost Setup
- Why is SERVER_SCAN_DELAY set to 180 seconds by default?
- Cloudplow
- Sonarr / Radarr
- ruTorrent
- Nextcloud
- Misc
ARM is not supported.
- Choose an X86 server (vs ARM).
- Select "Ubuntu Xenial" as the distribution.
- Click the server on the list.
- Under "ADVANCED OPTIONS", click "SHOW".
- Set "ENABLE LOCAL BOOT" to off.
- Click the "BOOTSCRIPT" link and select a bootscript above 4.10.
- Start the server.
Reference: https://www.scaleway.com/docs/bootscript-and-how-to-use-it/
If you are having issues upgrading the kernel on OVH, where the kernel upgrade is not taking effect, run uname -r to see if you have grs in the kernel version string. If so, see https://pterodactyl-daemon.readme.io/v0.4/docs/updating-ovh-kernel on how to update the kernel.
Use the following commands to find out your account's user name and group info:
id
or
id `whoami`
You'll see a line like the following:
uid=XXXX(yourusername) gid=XXXX(yourgroup) groups=XXXX(yourgroup)
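If you just want the names by themselves, `id` can print them directly:

```shell
# Print only the current user name and primary group name
id -un   # user name
id -gn   # primary group name
```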
Run the following commands line by line:
sudo useradd -m <username>
sudo usermod -aG sudo <username>
sudo passwd <username>
sudo chsh -s /bin/bash <username>
su <username>
How to check your current shell:
echo $0
(example output: -sh)
or
echo ${SHELL}
(example output: /bin/sh)
Run this command to set bash as your shell (where <user>
is replaced with your username):
sudo chsh -s /bin/bash <user>
sudo reboot
- Stop all Docker containers:
docker stop $(docker ps -a -q)
- Change ownership of /opt. Replace user and group to match yours (see here):
sudo chown -R user:group /opt
- Change permission inheritance of /opt:
sudo chmod -R ugo+X /opt
- Start all Docker containers:
docker start $(docker ps -a -q)
- Run the mounts tag:
cd ~/cloudbox
sudo ansible-playbook cloudbox.yml --tags mounts
If you get this error during CB Install:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "API request not authenticated; Status: 403; Method: GET: Call: /zones?name=; Error details: code: 9103, error: Unknown X-Auth-Key or X-Auth-Email; "}
Make sure:
- The email in settings.yml matches the one you have listed for your Cloudflare.com account.
- The cloudflare_api_key in settings.yml matches your domain's Cloudflare Global API Key.
In short, no. Cloudbox does not come with encryption support out-of-box.
While there are pros and cons to using either encrypted or unencrypted data on cloud services, we've decided not to deal with encryption in the out-of-the-box setup.
However, since Cloudbox uses Rclone VFS to mount cloud data, you can tweak the mounts and remotes to do this yourself. But doing so comes with no support/help from us.
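For reference, an encrypted setup is usually built by layering an Rclone crypt remote on top of an existing remote in rclone.conf. A minimal sketch, where the remote name google, the encrypted path, and the password placeholders are assumptions rather than Cloudbox defaults:

```ini
[google]
type = drive
# ... your existing remote ...

[google-crypt]
type = crypt
remote = google:encrypted
filename_encryption = standard
directory_name_encryption = true
password = <password obscured by rclone config>
password2 = <salt obscured by rclone config>
```

You would then point your mounts at google-crypt: instead of google:. Again, this configuration is unsupported.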
Only app data located in /opt and relevant config files (as listed below) are backed up. The backup script does this by creating tarball files (*.tar) for each folder in /opt/ and placing them into your backup folder (as set in backup_config.yml). The folders in /opt are all backed up, regardless of whether Cloudbox created them in the first place. For example, if you create /opt/bingbangboing, it will be backed up and restored by Cloudbox.
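Conceptually, the per-folder tarball step works like the sketch below (the paths are local stand-ins for /opt and your backup folder; the real script also handles exclusions and ownership):

```shell
# Demo: create one tarball per folder, mimicking the backup of /opt.
opt_dir="./opt-demo"        # stands in for /opt
backup_dir="./backup-demo"  # stands in for your backup folder
mkdir -p "$opt_dir/sonarr" "$opt_dir/bingbangboing" "$backup_dir"

for app in "$opt_dir"/*/; do
    name=$(basename "$app")
    # -C changes into the parent dir so the tarball holds relative paths
    tar -cf "$backup_dir/$name.tar" -C "$opt_dir" "$name"
done
ls "$backup_dir"
```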
If you have set it up, the community repo is located in /opt
, so it will get backed up [this includes any changes you've made in that repo to the config or roles]. There is no catalog kept of what community roles you may have run, so none of the roles themselves will be run automatically on restore, but the data will be backed up and restored.
Service files from /etc/systemd/system
are synced to /opt/systemd-backup
as part of the backup, so they are included in the tarball creation. This includes things like the rclone_vfs
, mergerfs
, cloudplow
, plex_autoscan
, and other system service files. If you have added additional mounts and the like via your own service files [perhaps with tip #44 or samount
or the like], these extra service files will be backed up, but will not be automatically restored.
Torrent seeding content, the NZBGet queue, and anything in /mnt/, /home/, or anywhere else other than the /opt/ folder will NOT be backed up (media files are moved to the cloud via Cloudplow anyway). If you do want to back up your seeding data, check out the scripts located in the /opt/scripts/rclone/ folder.
If Rclone/Rsync are enabled, the backup will be uploaded to a remote destination.
If keep_local_copy is enabled, the backup will remain locally in the backup folder; if NOT, the backup will be deleted. If you decide to disable Rclone/Rsync, then at least have keep_local_copy enabled, or else the backup will be created and then deleted right after.
The config files that are backed up are:
- ansible.cfg
- accounts.yml
- settings.yml
- adv_settings.yml
- rclone.conf
- backup_excludes.txt (if one exists in the cloudbox folder)
These files are kept separately from the backup tarball files to allow for easy access.
Note that the .ansible_vault
file is NOT backed up.
The table below shows what is backed up, and where it is restored to, during a simple backup/restore:

| Items Backed Up | Backed Up From | Restored To |
|---|---|---|
| Application Data | /opt/ | /opt/ |
| Ansible Config | ~/cloudbox/ansible.cfg | |
| Account Settings | ~/cloudbox/accounts.yml | |
| Cloudbox Settings | ~/cloudbox/settings.yml | |
| Cloudbox Advanced Settings | ~/cloudbox/adv_settings.yml | |
| Backup Excludes List (custom) | ~/cloudbox/backup_excludes_list.txt | ~/cloudbox/backup_excludes_list.txt |
| Rclone Config | ~/.config/rclone/rclone.conf | ~/.config/rclone/rclone.conf |
An optional service that allows for easy backing up and restoring of CLIENT-SIDE ENCRYPTED config files.
The config files that are backed up are:
- ansible.cfg
- accounts.yml
- settings.yml
- adv_settings.yml
- backup_config.yml
- rclone.conf
These files are the ones needed to run a successful restore.
Note: backup_excludes_list.txt is not backed up to the Restore Service, simply because it is not needed for a restore to work, and also because it IS automatically restored during the restore process itself.
How does this work?
- User fills in a username and password for the Restore Service in the backup config.
- During backup, config files are encrypted client-side using a salt-hashed version of the username and password (your raw username is never sent to the Restore Service), and then uploaded to the Restore Service, which is located at cloudbox.works.
- When a user needs to restore their backup on a new box, they can pull their backed-up config files from the Restore Service with a single command.
The source code for the Restore Service Scripts are listed below:
- https://github.com/Cloudbox/Cloudbox/blob/master/roles/backup/tasks/restore_service.yml (Backup Script)
- https://github.com/Cloudbox/cloudbox.github.io/blob/master/scripts/restore.sh (Restore Script)
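To illustrate the idea only (this is not the actual Restore Service code; the file names and openssl parameters here are assumptions), client-side encryption with a salted, credential-derived key can be approximated like this:

```shell
# Demo only: encrypt a config file client-side before upload, then decrypt it.
printf 'example_setting: secret\n' > settings-demo.yml

# Encrypt with a salted key derived from the username and password
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:"myuser:mypassword" \
    -in settings-demo.yml -out settings-demo.yml.enc

# On the new box: decrypt with the same credentials
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass pass:"myuser:mypassword" \
    -in settings-demo.yml.enc -out settings-demo-restored.yml
```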
Run multiple tags together by separating them with commas, no spaces. Quotes are optional. Order is not important.
Use this to install containers or roles that are not included in "default" install types.
Example:
sudo ansible-playbook cloudbox.yml --tags core,emby,sonarr,radarr,sonarr4k,radarr4k,nzbget,nzbhydra2
Skip tags you don't want to run by listing them with --skip-tags, separated by commas. Quotes are optional. Order is not important.
Use this to skip containers or roles that are included in the "default" install types.
Example:
sudo ansible-playbook cloudbox.yml --skip-tags rutorrent,jackett
Note: Be careful about what you skip, as some things are needed for Cloudbox to function properly.
You can even merge --tags
and --skip-tags
into one command. Order is not important (e.g. skip tags can come before tags).
Example:
sudo ansible-playbook cloudbox.yml --tags core,emby,sonarr,radarr,sonarr4k,radarr4k,nzbget,nzbhydra2 --skip-tags rutorrent,jackett
Can also be used along with one of the "default" tags (e.g. cloudbox
).
Example:
sudo ansible-playbook cloudbox.yml --tags cloudbox,sonarr4k,radarr4k --skip-tags rutorrent,jackett
You can "permanently" skip tags by adding the following lines to ~/cloudbox/ansible.cfg
.
Format:
[tags]
skip = TAG1,TAG2,etc
And then continue to install with the normal --tags
command.
Example:
cat ~/cloudbox/ansible.cfg
[tags]
skip = rutorrent,jackett
sudo ansible-playbook cloudbox.yml --tags cloudbox,sonarr4k,radarr4k
In this example, the Cloudbox installer will install with all the default items, sonarr4k, and radarr4k, but will not install rutorrent and jackett.
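The edit above can be scripted; this demo works on a temporary copy so it can run anywhere (for real use, adjust the path to ~/cloudbox/ansible.cfg; it assumes no [tags] section exists yet, and the pre-existing [defaults] content is just a placeholder):

```shell
# Demo: append a permanent skip list to a copy of ansible.cfg
cfg=$(mktemp)
printf '[defaults]\nretry_files_enabled = False\n' > "$cfg"   # placeholder pre-existing content

# Add the [tags] section with the tags to always skip
printf '\n[tags]\nskip = rutorrent,jackett\n' >> "$cfg"

grep 'skip' "$cfg"
```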
Full error message:
Error Connecting: Error while fetching server API version: Timeout value connect was Timeout(connect=60, read=60, total=None), but it must be an int or float.
Run sudo pip install requests==2.10.0
and retry.
403 Client Error: Forbidden: endpoint with name <container name> already exists in network <network name>
Example:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Error starting container 6fb60d4cdabe938986042e06ef482012a1d85a66a099d861f08062d8262c2ef7: 403 Client Error: Forbidden (\"{\"message\":\"endpoint with name jackett already exists in network bridge\"}\")"}
to retry, use: --limit @/home/seed/cloudbox/cloudbox.retry
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1
You have a remnant of the container in Docker's network.
You can verify this with the command below (replace <network name> and <container name> with the network name and container name mentioned in the error, respectively):
docker network inspect <network name> | grep <container name>
To remove the remnant, run this command and try again:
docker network disconnect -f <network name> <container name>
500 Server Error: Internal Server Error: driver failed programming external connectivity on endpoint <container name> bind for 0.0.0.0:<port number> failed: port is already allocated
sudo service docker stop
sudo service docker start
Follow the appropriate steps for your branch from this page.
If you get any errors during git pull, you will need to reset the Cloudbox git folder (i.e. ~/cloudbox/). This will not reset your accounts.yml, settings.yml, adv_settings.yml, or ansible.cfg files.
- If you are on the master branch (default):
cd ~/cloudbox
git reset --hard origin/master
- If you are on the develop branch:
cd ~/cloudbox
git reset --hard origin/develop
(1) it keeps all Cloudbox containers organized under one network; and (2) the default bridge network does not allow network aliases.
source: https://forums.docker.com/t/what-to-do-when-all-docker-commands-hang/28103/5
You can view the status by looking at the log for the letsencrypt container:
docker logs -f letsencrypt
Then see if any of the issues below apply to you.
/etc/nginx/certs/lidarr.domain.com /app
Creating/renewal lidarr.domain.com certificates... (lidarr.domain.com)
2019-10-16 22:56:00,081:INFO:simp_le:1479: Generating new certificate private key
2019-10-16 22:56:00,428:ERROR:simp_le:1446: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v01.api.letsencrypt.org/acme/authz-v3/X
Challenge validation has failed, see error log.
Your server is blocking access to Let's Encrypt, so it is unable to verify your subdomain and issue you a certificate.
This can happen when hosting from home and/or it is behind a firewall/router.
Unblock access to 443/80 to the outside world and run the following command:
docker exec letsencrypt /app/force_renew
This happens when SSL certificates have not been issued yet.
You may even see too many registrations for this IP
in the log (like below)...
2017-11-30 03:35:41,847:INFO:simp_le:1538: Retrieving Let's Encrypt latest Terms of Service.
2017-11-30 03:35:42,817:INFO:simp_le:1356: Generating new account key
ACME server returned an error: urn:acme:error:rateLimited :: There were too many requests of a given type :: Error creating new registration :: too many registrations for this IP
Just give it some time (hours to days) and it will resolve itself.
Creating/renewal request.domain.com certificates... (request.domain.com)
2017-12-02 07:34:44,167:INFO:simp_le:1538: Retrieving Let's Encrypt latest Terms of Service.
2017-12-02 07:34:45,331:INFO:simp_le:1356: Generating new account key
2017-12-02 07:34:46,853:INFO:simp_le:1455: Generating new certificate private key
ACME server returned an error: urn:acme:error:rateLimited :: There were too many requests of a given type :: Error creating new cert :: too many certificates already issued for: domain.com
You're limited to 50 new certificates, per registered domain, per week.
Visit https://letsencrypt.org/docs/rate-limits/ for more info.
2017-11-30 03:35:37,729:INFO:simp_le:1538: Retrieving Let's Encrypt latest Terms of Service.
2017-11-30 03:35:40,256:INFO:simp_le:1455: Generating new certificate private key
2017-11-30 03:35:41,406:ERROR:simp_le:1421: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If you haven't setup correct CAA fields or if your DNS provider does not support CAA, validation attempts after september 8, 2017 will fail. Failing authorizations: https://acme-v01.api.letsencrypt.org/acme/authz/XXXXXXXXXX
Challenge validation has failed, see error log.
- Make sure your domain registrar is pointing to the correct server IP address. You can verify this by pinging it (ping yourdomain.com).
- Make sure you used the correct domain address in accounts.yml.
- Check the status of the Let's Encrypt certs:
docker exec letsencrypt /app/cert_status
- Check the letsencrypt logs:
docker logs -f letsencrypt
- Check the nginx-proxy logs:
docker logs -f nginx-proxy
- If you see ERR_TOO_MANY_REDIRECTS, disable Cloudflare CDN/Proxy.
- If nothing pops out, check the logs for the Docker container:
docker logs -f --tail 100 containername
See if it failed to start, was terminated with a kill command, or shows misc errors.
- See if you can load it up via ngrok tunnelling:
ngrok http PORTNUMBER
Visit the ngrok.io URL it generates.
- Make sure your PC's DNS is updated.
Rclone error: Failed to save config file: open /home/<user>/.config/rclone/rclone.conf: permission denied
Replace user and group to match yours (see here):
sudo chown -R user:group ~/.config/rclone/
sudo chmod -R 0755 ~/.config/rclone/
See Basics: Cloudbox Paths and Prerequisites: Cloud Storage. Remember folder names mentioned throughout the site are CASE SENSITIVE.
The current default used for mounting cloud storage is Rclone VFS:
sudo systemctl status rclone_vfs
If you are using Rclone Cache:
sudo systemctl status rclone_cache
If you are using Plexdrive 4:
sudo systemctl status plexdrive4
If you are using Plexdrive 5:
sudo systemctl status plexdrive5
You may resolve this by one of the following:
- Installing Plex again (do this for new Plex DBs/installs). THIS WILL DELETE ANY EXISTING PLEX CONFIGURATION, SUCH AS LIBRARIES.
  - Remove the Plex container (it may show "Error response from daemon: No such container" if not created yet):
  sudo docker rm -f plex
  - Remove the Plex folder:
  sudo rm -rf /opt/plex
  - Reinstall the Plex container by running the following command in ~/cloudbox:
  sudo ansible-playbook cloudbox.yml --tags plex
- Installing Plex again (do this for existing Plex DBs/installs). THIS WILL LEAVE ANY EXISTING PLEX LIBRARIES AND METADATA INTACT.
  - Remove the Plex preferences file:
  sudo rm "/opt/plex/Library/Application Support/Plex Media Server/Preferences.xml"
  - Reinstall the Plex container by running the following command:
  cd ~/cloudbox && sudo ansible-playbook cloudbox.yml --tags plex
- Using SSH tunneling to log into Plex and set your credentials:
  - On your host PC, run (replace <user> with your user name and <yourserveripaddress> with your server IP address, no arrows):
  ssh <user>@<yourserveripaddress> -L 32400:0.0.0.0:32400 -N
  This will just hang there without any message. That is normal.
  - In a browser, go to http://localhost:32400/web.
  - Log in with your Plex account.
  - On the "How Plex Works" page, click "GOT IT!".
  - Close the "Plex Pass" pop-up if you see it.
  - Under "Server Setup", you will see "Great, we found a server!". Give your server a name and tick "Allow me to access my media outside my home". Click "NEXT".
  - On "Organize Your Media", hit "NEXT" (you will do this later). Then hit "DONE".
  - At this point, you may press Ctrl + C on the SSH tunnel to close it.
  - Reorder the Plex agents for TV/Movies so that local assets are at the bottom.
Replace user and group to match yours (see here):
sudo chown -R user:group /opt/plex/Library/Logs
sudo chmod -R g+s /opt/plex/Library/Logs
Note: If you have a separate Plex and Feeder setup, this will be done on the server where Plex is installed.
- Test another download and run the following command:
tail -f /opt/plex_autoscan/plex_autoscan.log
- If you see this...
terminate called after throwing an instance of 'boost::filesystem::filesystem_error' boost::filesystem::create_directories: Permission denied: "/config/Library/Logs"
...there is an issue with the permissions on that folder that you'll need to fix manually (Cloudbox can't fix this, as Plex creates the folder after the first scan).
To fix it, run the following commands. Replace user and group to match yours (see here):
docker stop plex
sudo chown -R user:group /opt/plex
docker start plex
Example of a successful scan:
2017-10-10 17:48:26,429 - DEBUG - PLEX [ 6185]: Waiting for turn in the scan request backlog...
2017-10-10 17:48:26,429 - INFO - PLEX [ 6185]: Scan request is now being processed
2017-10-10 17:48:26,474 - INFO - PLEX [ 6185]: No 'Plex Media Scanner' processes were found.
2017-10-10 17:48:26,474 - INFO - PLEX [ 6185]: Starting Plex Scanner
2017-10-10 17:48:26,475 - DEBUG - PLEX [ 6185]: docker exec -u plex -i plex bash -c 'export LD_LIBRARY_PATH=/usr/lib/plexmediaserver;/usr/lib/plexmediaserver/Plex\ Media\ Scanner --scan --refresh --section 1 --directory '"'"'/data/Movies/Ravenous (1999)'"'"''
2017-10-10 17:48:33,712 - INFO - UTILS [ 6185]: GUI: Scanning Ravenous (1999)
2017-10-10 17:48:33,959 - INFO - UTILS [ 6185]: GUI: Matching 'Ravenous'
2017-10-10 17:48:38,556 - INFO - UTILS [ 6185]: GUI: Score for 'Ravenous' (1999) is 117
2017-10-10 17:48:38,607 - INFO - UTILS [ 6185]: GUI: Requesting metadata for 'Ravenous'
2017-10-10 17:48:38,705 - INFO - UTILS [ 6185]: GUI: Background media analysis on Ravenous
2017-10-10 17:48:39,201 - INFO - PLEX [ 6185]: Finished scan!
ERROR - PLEX [10490]: Unexpected response status_code for empty trash request: 401
You need to generate another token and re-add that back into the config. See Plex Autoscan.
Example Log:
2017-11-21 04:26:32,619 - ERROR - PLEX [ 7089]: Exception finding metadata_item_id for '/data/TV/Gotham/Season 01/Gotham - S01E01 - Pilot.mkv':
2017-11-21 04:26:32,619 - INFO - PLEX [ 7089]: Aborting analyze of '/data/TV/Gotham/Season 01/Gotham - S01E01 - Pilot.mkv' because could not find a metadata_item_id for it
Possible Issues:
- One of the mounts has changed (e.g. Rclone VFS/Plexdrive or MergerFS/UnionFS was restarted).
- Permission issues (see here).
Solution 1:
- Make sure the remote mount is working OK (pick the relevant one below).
The current default used for mounting cloud storage is Rclone VFS:
sudo systemctl status rclone_vfs
If you are using Rclone Cache:
sudo systemctl status rclone_cache
If you are using Plexdrive 4:
sudo systemctl status plexdrive4
If you are using Plexdrive 5:
sudo systemctl status plexdrive5
- Make sure the union mount is working OK.
The current default used for creating the union mount is MergerFS:
sudo systemctl status mergerfs
If you are using UnionFS:
sudo systemctl status unionfs
- Restart Plex:
docker stop plex && docker start plex
Solution 2:
If all else fails, disable analyze in the config.
- Open /opt/plex_autoscan/config/config.json:
nano /opt/plex_autoscan/config/config.json
- Make the following edit:
"PLEX_ANALYZE_TYPE": "off",
- Restart Plex Autoscan:
sudo systemctl restart plex_autoscan
Every time Sonarr or Radarr downloads a new file, or upgrades a previous one, a request is sent via Plex Autoscan for Plex to scan the movie folder or TV season path and look for changes. Since Sonarr and Radarr delete the previous files on upgrades, the scan will cause the new media to show up in your Plex library; however, the deleted files would be marked as "unavailable" (i.e. a trash icon). When the control file is present and the option in the Plex Autoscan config is enabled (the default), Plex Autoscan will empty the trash for you, thereby removing the deleted media from the library.
If the remote mount for your cloud storage provider (e.g. Google Drive) ever disconnected during a Plex scan of your media, Plex would mark the missing files as unavailable, and emptying the trash would remove them from the library. To prevent this, Plex Autoscan checks for a control file in the unionfs path (i.e. /mnt/unionfs/mounted.bin) before running any empty trash commands. The control file is just a blank file that resides in the root folder of your Rclone remote (i.e. cloud storage provider) and lets Plex Autoscan know that it is still mounted.
Once the remote is remounted, all the files marked unavailable in Plex will be playable again and Plex Autoscan will resume its emptying trash duties post-scan.
To learn more about Plex Autoscan, visit https://github.com/l3uddz/plex_autoscan.
TLDR: Plex Autoscan will not remove deleted media out of Plex without it.
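The guard described above can be pictured with a small sketch (the demo uses a local stand-in path; the real control file is /mnt/unionfs/mounted.bin, and the real check lives inside Plex Autoscan's Python code):

```shell
# Demo of the control-file guard: only empty trash while the mount is up.
control_file="./mounted.bin-demo"   # stands in for /mnt/unionfs/mounted.bin

touch "$control_file"               # pretend the remote is mounted

if [ -f "$control_file" ]; then
    echo "control file present: safe to empty trash"
else
    echo "control file missing: skipping empty trash"
fi
```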
If you are using an all-in-one Cloudbox and don't want to have the Plex Autoscan port open, you may set it up so that it runs on the localhost only.
To do so, follow these steps:
Plex Autoscan: (only if changed from default)
- Open /opt/plex_autoscan/config/config.json:
nano /opt/plex_autoscan/config/config.json
- Make the following edit:
"SERVER_IP": "0.0.0.0",
Note: This is the default config.
- Restart Plex Autoscan:
sudo systemctl restart plex_autoscan
Sonarr/Radarr:
- Retrieve the 'Docker Gateway IP Address' by running the following:
docker inspect -f '{{ .NetworkSettings.Networks.cloudbox.Gateway }}' sonarr
- Replace the Plex Autoscan URL with:
http://docker_gateway_ip_address:3468/yourserverpass
- Your Plex Autoscan URL will now look like this:
http://172.18.0.1:3468/yourserverpass
Alternatively, you can set it up this way:
Note: This method benefits from completely closing off Plex Autoscan to the outside.
Plex Autoscan:
- Retrieve the 'Docker Gateway IP Address' by running the following:
docker inspect -f '{{ .NetworkSettings.Networks.cloudbox.Gateway }}' sonarr
- Open /opt/plex_autoscan/config/config.json:
nano /opt/plex_autoscan/config/config.json
- Make the following edit:
"SERVER_IP": "docker_network_gateway_ip_address",
- It will now look like this:
"SERVER_IP": "172.18.0.1",
- Restart Plex Autoscan:
sudo systemctl restart plex_autoscan
Sonarr/Radarr:
- Replace the Plex Autoscan URL with:
http://docker_gateway_ip_address:3468/yourserverpass
- Your Plex Autoscan URL will now look like this:
http://172.18.0.1:3468/yourserverpass
When Plex Autoscan gets a scan request from Sonarr, it tells Plex to scan the relevant TV show season folder. So, to avoid multiple Plex scans of the same season when more episodes of that season come in, Plex Autoscan can wait (via SERVER_SCAN_DELAY) and merge multiple scan requests into a single one. This is particularly noticeable when consecutive episodes are being downloaded/imported into Sonarr.
During this SERVER_SCAN_DELAY, if another request comes in for the same season folder, it will restart the delay timer again, thus allowing for even more time for new items to come in.
SERVER_SCAN_DELAY of 180 seconds was calculated with an average episode download time of a few minutes each.
There is no harm in multiple Plex scans of the same season folder, other than keeping Plex busier and perhaps more stressed, so this delay tries to alleviate that.
Alternative recommended settings are: 120 and 90 seconds.
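The delay-and-merge behaviour can be sketched as a debounce timer (illustrative only, not Plex Autoscan's actual code; the delay is shortened to 2 seconds for the demo):

```shell
# Demo: each new scan request for the same folder resets the countdown,
# so back-to-back episode imports end up triggering only one scan.
SCAN_DELAY=2   # Plex Autoscan's default is 180

timer_pid=""

request_scan() {
    folder="$1"
    # Cancel the pending scan, if any: this "resets the delay timer"
    [ -n "$timer_pid" ] && kill "$timer_pid" 2>/dev/null
    # Start a fresh countdown; the scan fires only if nothing resets it
    ( sleep "$SCAN_DELAY" && echo "scanning: $folder" ) &
    timer_pid=$!
}

request_scan "/data/TV/Show/Season 01"   # episode 1 imported
sleep 1
request_scan "/data/TV/Show/Season 01"   # episode 2 resets the timer
wait   # a single "scanning" line appears, about 3s after the first request
```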
If the activity log is stuck on:
2018-06-03 13:44:59,659 - INFO - cloudplow - do_upload - Waiting for running upload to finish before proceeding...
This means that an upload task was prematurely canceled and left lock file(s) behind, preventing another upload.
To fix this, run this command:
rm -rf /opt/cloudplow/locks/*
or
sudo systemctl restart cloudplow
Cloudbox uses Sonarr's develop branch and Radarr's nightly branch during install. If you want to import an existing database that is on Sonarr's master branch or Radarr's develop branch (the two most stable branches), you should upgrade to those releases on a working installation first, make a backup, and then import into the respective folders (i.e. /opt/sonarr/
or /opt/radarr/
).
- Stop the container:
docker stop rutorrent
- Go into the folder where the ruTorrent .htpasswd resides:
cd /opt/rutorrent/nginx
- Remove the old .htpasswd:
rm .htpasswd
- Generate a new .htpasswd (where USER is your preferred username):
htpasswd -c .htpasswd USER
- Verify that /opt/rutorrent/nginx/nginx.conf has the following lines:
auth_basic "Restricted Content";
auth_basic_user_file /config/nginx/.htpasswd;
- Start the container:
docker start rutorrent
- Stop the ruTorrent Docker container:
docker stop rutorrent
- Edit the rtorrent.rc file:
/opt/rutorrent/rtorrent/rtorrent.rc
- Set the following option:
directory = /downloads/rutorrent
- Restart the ruTorrent Docker container:
docker restart rutorrent
By default, access to DHT, UDP, and PEX is disabled, since most private trackers (and some server providers) do not allow them. Attempting to add a torrent from a public tracker would result in the torrent being stuck, like this:
To enable access to public trackers, do the following:
- Stop the ruTorrent Docker container:
docker stop rutorrent
- Edit the rtorrent.rc file:
/opt/rutorrent/rtorrent/rtorrent.rc
- Set the following options:
dht.mode.set = on
trackers.use_udp.set = yes
protocol.pex.set = yes
- Start the ruTorrent Docker container:
docker start rutorrent
DB data is stored in /opt/mariadb and backed up along with the Cloudbox Backup.
However, you can separately make a backup of the DB into a single nextcloud_backup.sql file by running the following command:
docker exec mariadb /usr/bin/mysqldump -u root --password=password321 nextcloud > nextcloud_backup.sql
And restoring it back:
cat nextcloud_backup.sql | docker exec -i mariadb /usr/bin/mysql -u root --password=password321 nextcloud
Python or script errors mentioning an issue with the config file are usually due to invalid JSON formatting in the file.
Examples:
Traceback (most recent call last):
File "scan.py", line 52, in <module>
conf.load()
File "/opt/plex_autoscan/config.py", line 157, in load
cfg = self.upgrade(json.load(fp))
File "/usr/lib/python2.7/json/__init__.py", line 291, in load
**kw)
File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 380, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting , delimiter: line 20 column 2 (char 672)
Traceback (most recent call last):
File "/opt/plex_autoscan/scan.py", line 52, in <module>
conf.load()
File "/opt/plex_autoscan/config.py", line 157, in load
cfg = self.upgrade(json.load(fp))
File "/usr/lib/python2.7/json/__init__.py", line 291, in load
**kw)
File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
Traceback (most recent call last):
File "/usr/local/bin/cloudplow", line 60, in <module>
conf.load()
File "/opt/cloudplow/utils/config.py", line 227, in load
cfg, upgraded = self.upgrade_settings(json.load(fp))
File "/usr/lib/python3.5/json/__init__.py", line 268, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/usr/lib/python3.5/json/__init__.py", line 319, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.5/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.5/json/decoder.py", line 355, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 46 column 13 (char 1354)
Fixes:
- Paste the JSON file at https://jsonformatter.curiousconcept.com/ and click "Process". This will tell you what the issue is and fix it for you.
or
- Run:
jq '.' config.json
If there are no issues, it will simply print out the full JSON file.
If there is an issue, a message will display the location of the issue:
parse error: Expected separator between values at line 7, column 10
See Community Wiki.