Getting the sizes of Top level Directories in an AWS S3 Bucket with Boto3

I was recently asked to create a report showing the total size of the files within each top level folder, including all the subdirectories under it, in our S3 buckets.

S3 bucket ‘files’ are really objects; each object has a key that contains the path where the object is stored within the bucket.
I came up with this function to take a bucket name and iterate over the objects within that bucket. For each item, the key is examined and the object's size is added to a running total kept in a dictionary.

Here’s what I ended up with.

import sys
import boto3
import botocore.exceptions

def get_top_dir_size_summary(bucket_to_search):
    """
    This function takes in the name of an s3 bucket and returns a dictionary
    containing the top level dirs as keys and total filesize as values.
    :param bucket_to_search: a String containing the name of the bucket
    """
    # Setup the output dictionary for running totals
    dirsizedict = {}
    # Create 1 entry for '.' to represent the root folder instead of the default.
    dirsizedict['.'] = 0

    # ------------
    # Setup the AWS Resource and Client
    s3 = boto3.resource('s3')
    s3client = boto3.client('s3')

    # This is a check to ensure a bad bucket name wasn't passed in.   I'm sure there is a better
    # way to check this.   If you have a better method, please comment on the article.
    try:
        s3client.head_bucket(Bucket=bucket_to_search)
    except botocore.exceptions.ClientError:
        print('Bucket ' + bucket_to_search + ' does not exist or is unavailable. - Exiting')
        sys.exit(1)

    # Since buckets can have more than 1000 items, have to use a paginator to iterate 1000 at a time
    paginator = s3client.get_paginator('list_objects')
    pageresponse = paginator.paginate(Bucket=bucket_to_search)

    # Iterate through each object in the bucket through the paginator.
    for pageobject in pageresponse:

        # Check to see if a page has contents; without this an empty bucket would throw an error.
        if 'Contents' in pageobject.keys():

            # If there are contents, then iterate through each 'file'.
            for file in pageobject['Contents']:
                itemtocheck = s3.ObjectSummary(bucket_to_search, file['Key'])

                # Get the top level directory from the file by splitting the key.
                keylist = file['Key'].split('/')

                # See if the file is on root; if keylist has 1 item (root dir), there are no dirs on the item
                if len(keylist) == 1:
                    dirsizedict['.'] += itemtocheck.size
                else:
                    # Not root: if the key already exists, add the value to the running total,
                    # otherwise create the entry.
                    if keylist[0] in dirsizedict:
                        dirsizedict[keylist[0]] += itemtocheck.size
                    else:
                        dirsizedict[keylist[0]] = itemtocheck.size

    return dirsizedict
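
To use it, you just pass in a bucket name and print the totals. Here's a minimal sketch of that; the bucket name below is only a placeholder:

# Hypothetical usage - 'my-example-bucket' is a placeholder bucket name.
sizes = get_top_dir_size_summary('my-example-bucket')
for folder, totalsize in sizes.items():
    print(folder.ljust(40) + str(totalsize).rjust(20))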

That script is probably a little rough to an elite coder, so if you have any thoughts on improvement, let me hear them.

Using Python and Boto3 to get Instance Tag information

Here are 2 sample functions to illustrate how you can get information about Tags on instances using Boto3 in AWS.

import boto3

def get_instance_name(fid):
    """
    When given an instance ID as str e.g. 'i-1234567', return the instance 'Name' from the name tag.
    :param fid: instance ID as a string
    """
    ec2 = boto3.resource('ec2')
    ec2instance = ec2.Instance(fid)
    instancename = ''
    for tags in ec2instance.tags:
        if tags["Key"] == 'Name':
            instancename = tags["Value"]
    return instancename

In this function, I create the ec2 resource object using the instance ID passed to the function. I iterate through the Tags of the instance until I find the ‘Name’ Tag and return its value. This is a very simple function that can pull any tag value, really.

Next up, this function will list all instances with a certain Tag name and certain Value on that tag.

import boto3

def list_instances_by_tag_value(tagkey, tagvalue):
    """
    When passed a tag key and tag value, this will return a list of InstanceIds that were found.
    :param tagkey: str
    :param tagvalue: str
    :return: list
    """
    ec2client = boto3.client('ec2')

    response = ec2client.describe_instances(
        Filters=[
            {'Name': 'tag:' + tagkey, 'Values': [tagvalue]}
        ]
    )
    instancelist = []
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            instancelist.append(instance["InstanceId"])
    return instancelist

Here I use 'response' to collect the instances which fall into the Filter used. Take note that I used 'tag:' + tagkey as the filter name. A 'tag-value' filter would return any instance that has this value on any of its tags, and a 'tag-key' filter returns any instance with a matching tag name regardless of the value. I want a specific tag with a specific value.
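
Putting the two functions together, here's a quick sketch of how you might call them; the tag key and value below are just example values:

# Hypothetical tag key/value used only for illustration.
matching_instances = list_instances_by_tag_value('Environment', 'Production')
for instance_id in matching_instances:
    print(instance_id + ' is named ' + get_instance_name(instance_id))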

Getting the Size of an S3 Bucket using Boto3 for AWS

I’m writing this on 9/14/2016. I make note of the date because the size of an S3 bucket may seem like a very basic bit of information, but AWS does not have an easy method with which to collect it. I fully expect them to add that functionality at some point. As of this date, I could only come up with 2 methods to get the size of a bucket. One could list all the bucket items and iterate over all the objects while keeping a running total. That method does work, but I found that for a bucket with many thousands of items, it could take hours per bucket.

A better method uses AWS CloudWatch metrics instead. Every S3 bucket automatically gets 2 daily storage metrics published to CloudWatch (bucket size and number of objects), and I use the size metric to pull the average over a set period, usually 1 day.

Here’s what I came up with:

import boto3
import datetime

now = datetime.datetime.now()

cw = boto3.client('cloudwatch')
s3client = boto3.client('s3')

# Get a list of all buckets
allbuckets = s3client.list_buckets()

# Header Line for the output going to standard out
print('Bucket'.ljust(45) + 'Size in Bytes'.rjust(25))

# Iterate through each bucket
for bucket in allbuckets['Buckets']:
    # For each bucket item, look up the corresponding metrics from CloudWatch
    response = cw.get_metric_statistics(Namespace='AWS/S3',
                                        MetricName='BucketSizeBytes',
                                        Dimensions=[
                                            {'Name': 'BucketName', 'Value': bucket['Name']},
                                            {'Name': 'StorageType', 'Value': 'StandardStorage'}
                                        ],
                                        Statistics=['Average'],
                                        Period=86400,
                                        StartTime=(now - datetime.timedelta(days=1)).isoformat(),
                                        EndTime=now.isoformat())
    # The cloudwatch metrics will have the single datapoint, so we just report on it. 
    for item in response["Datapoints"]:
        print(bucket["Name"].ljust(45) + str("{:,}".format(int(item["Average"]))).rjust(25))
        # Note the use of "{:,}".format.   
        # This is a new shorthand method to format output.
        # I just discovered it recently. 

Using Python Boto3 with Amazon AWS S3 Buckets

I’m here adding some additional Python Boto3 examples, this time working with S3 Buckets.

So to get started, lets create the S3 resource, client, and get a listing of our buckets.

import boto3

s3 = boto3.resource('s3')
s3client = boto3.client('s3')

response = s3client.list_buckets()
for bucket in response["Buckets"]:
    print(bucket["Name"])

Here we create the s3 client object and call ‘list_buckets()’. The response is a dictionary with a key called ‘Buckets’ that holds a list of dicts, one with each bucket's details.

To list out the objects within a bucket, we can add the following:

    theobjects = s3client.list_objects_v2(Bucket=bucket["Name"])
    for object in theobjects["Contents"]:
        print(object["Key"])

Note that if the Bucket has no items, then there will be no Contents to list and you will get an error thrown: KeyError: 'Contents'.
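
One simple way to guard against that is to check for the key before looping, for instance:

    theobjects = s3client.list_objects_v2(Bucket=bucket["Name"])
    if "Contents" in theobjects:
        for object in theobjects["Contents"]:
            print(object["Key"])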

Each object returned is a dictionary with Key Value pairs describing the object. The Boto3 docs are your friend here.

Now if the Bucket has over 1,000 items, the list_objects is limited to 1000 replies. To get around this, we need to use a Paginator.

import boto3

s3 = boto3.resource('s3')
s3client = boto3.client('s3')

response = s3client.list_buckets()
for bucket in response["Buckets"]:
    # Create a paginator to pull 1000 objects at a time
    paginator = s3client.get_paginator('list_objects')
    pageresponse = paginator.paginate(Bucket=bucket["Name"])
    # PageResponse Holds 1000 objects at a time and will continue to repeat in chunks of 1000. 
    for pageobject in pageresponse:
        for file in pageobject["Contents"]:
            print(file["Key"])

How to use Python Boto3 to list Instances in Amazon AWS

Continuing on with simple examples to help beginners learn the basics of Python and Boto3.

This is a very simple tutorial showing how to get a list of instances in your Amazon AWS environment.

import boto3

ec2client = boto3.client('ec2')
response = ec2client.describe_instances()
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        # This sample print will output the entire Dictionary object
        print(instance)
        # This print will output the value of the Dictionary key 'InstanceId'
        print(instance["InstanceId"])

You can also create a resource object from the instance item as well.
for instance in reservation["Instances"]:
    ec2 = boto3.resource('ec2')
    specificinstance = ec2.Instance(instance["InstanceId"])
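
Once you have the resource object, you can read its attributes or call actions on it directly. For example, a small sketch using the instance created above:

    # Read a couple of attributes off the resource object.
    print(specificinstance.instance_type)
    print(specificinstance.state["Name"])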

Getting started with Amazon AWS and Boto3

If you are starting from scratch with using Python and Boto3 to script and automate Amazon AWS environments, then this should help get you going.

To begin, you’ll need a few items:
1) Download and install the latest Amazon AWS CLI. This is the command line client for AWS.
2) You will need credentials for an Amazon AWS environment, of course. There are lots of articles on how to set up the AWS CLI with an AWS account.
3) Download and install the latest Python release. For these examples I'm using Python 3.5.2.
4) You will need to ‘pip install boto3’ in a Python environment. You can do this from the command line. I recommend creating a virtual environment for AWS/Boto3 work to keep packages separate.
5) If you’ve never used a Python IDE before, try the free PyCharm Community Edition. I use it and it really helps speed up your coding.

At this point, you should have a working AWS CLI, a Python interpreter, and have pip installed the boto3 library.

It’s a good idea to keep the boto3 documentation handy in another browser tab. Because of how boto3 works, there is no premade library and, when working in PyCharm, no autocomplete. Having the docs available to reference the objects and methods saves a lot of time.

In my experience with Boto3, there are resources and there are clients.

import boto3
ec2 = boto3.resource('ec2')
ec2client = boto3.client('ec2')

I use the resource to get information or take action on a specific item. I think of it as being at a ‘higher’ level than the client. When using the boto3 resource, you usually have to provide an id of the item. For example:

import boto3
ec2 = boto3.resource('ec2')
vpc = ec2.Vpc('vpc-12345678')

There is usually no searching or enumerating items at the resource level. Now that the class vpc is defined, you can look at the attributes, references, or collections. You will also have a series of actions you can perform on the item.

# Attributes - for example, the CIDR block of the VPC
print(vpc.cidr_block)
# Actions - for example vpc.attach_internet_gateway(...), vpc.delete(), etc.
# Collections - for example, all the instances that live inside this VPC
vpclist = vpc.instances.all()
for instance in vpclist:
    print(instance.id)

When using a client instead of a resource, you get a level of control over AWS items that is very close to the AWS CLI interface. With the client, you get more detail, can search, and can get very granular in your filters and tasks. Almost every resource has a client available.
For example:

import boto3
# Ec2
ec2 = boto3.resource('ec2')
ec2client = boto3.client('ec2')
# S3
s3 = boto3.resource('s3')
s3client = boto3.client('s3')

One of the most useful benefits of using a client is that you can describe the AWS items in that resource, you can filter or iterate for specific items, and manipulate or take actions on those items.
In the VPC example above, when defining the VPC resource, we needed to know the ID of the vpc. With a client, you can list all the items and find the one you want.

import boto3
ec2 = boto3.resource('ec2')
ec2client = boto3.client('ec2')
response = ec2client.describe_vpcs()

What you get back is a dictionary with the important item called “Vpcs”. This dictionary item holds the list of all the VPCs. (We could expect this from the Boto3 docs.)
So let’s list out all the VPCs from the response.

import boto3
ec2 = boto3.resource('ec2')
ec2client = boto3.client('ec2')
response = ec2client.describe_vpcs()
for vpc in response["Vpcs"]:
    print(vpc)

Now you can see how it starts to break down into the info you want. Here you can see that the response holds a list of dictionaries, one per VPC.
Next, let’s get to the individual items of each VPC.

import boto3
ec2 = boto3.resource('ec2')
ec2client = boto3.client('ec2')
response = ec2client.describe_vpcs()
for vpc in response["Vpcs"]:
    print("VpcId = " + vpc["VpcId"] + " uses Cidr of " + vpc["CidrBlock"])

That’s the very basics of it. Post any questions in the comments below. And I’ll be adding more posts on Boto3 in the coming days and weeks.

Chaining Operations and Operators in Linux

This is my list of my most used chaining operators in Ubuntu.

1. Semi-colon operator ;

The semi-colon allows you to chain multiple commands so that they run in-order.

# apt-get update ; apt-get upgrade ; echo 'upgrades are done'


2. Single Ampersand &

The single ampersand will execute a command in the background and can chain commands to run in the background. You use the command followed by a space and the ampersand per command.
To run a single command:

# ping &

To run more than 1 command in the background:

# ping & cp ~/* . & apt-get update &


3. Double Ampersand AND Operator &&

The && symbol, also called the AND Operator, links and executes commands in order only if the previous command is successful.
Technically a command is successful if it completes with exit status 0.
For example, I want to create a directory and file, but I only want to create the file if the directory is created correctly.

# mkdir ~/test && touch ~/test/tempfile1


4. Single PIPE |

The PIPE operator is used when you want the output of one command to be the input of a following command.
For example, this will list installed packages then search for lines with ‘java’.

# dpkg -l | grep java


5. OR Operator ||

The OR operator || is similar to the AND operator, only here it will execute the following command only if the previous command failed. A command fails if it exits with a non-zero status code.

# mkdir ~/test || echo 'The command failed'


6. NOT Operator !

The NOT operator ! is used in a command to identify those items that should be exempt from the command.
For example, imagine a directory with various filetypes and you wanted to remove all files except the PDFs.

# rm -r !(*.pdf)  


7. AND OR Operator && ||

This combination of the AND && and the OR || operators delivers what is basically an if-else statement based on the exit status code of the 1st command. The BASH shell has other commands to get an if-else result, but this is using just the operators.
For Example:

# mkdir ~/testdir && echo 'Directory Created' || echo 'Directory Creation Failed'


8. Precedence ()

When using && and ||, the exit codes determine whether or not to execute the following commands. Also, it is important to understand that the && and || only evaluate the 2 commands preceding and following the operators. So when using multiple operators, setting groups and precedence comes in handy when you want to ensure groups of commands complete or fail in a certain way.
In this example, Both commands in the 1st set () must exit with 0 in order for the next () set to execute.

# (ls *.pdf > pdf-files.list && cp *.pdf ~/) && (ls *.tar > tar-files.list && cp *.tar ~/) || echo 'Needs attention' 



Precedence can become very unreadable very quickly. I prefer using BASH’s IF THEN ELSE commands. These work just like any programming language… IF something is true, THEN execute this command, otherwise (ELSE) run this command. Note that in BASH the if is concluded with ‘fi’.
There are pages of options for IF THEN ELSE which you should explore.
For example, this is a very basic example:

# if ls *.pdf ; then echo 'There are PDFs here' ; else echo 'There are no PDFs here' ; fi 

How to Install OwnCloud to Ubuntu 14.04 LTS

This quick how-to steps you through a simple installation of OwnCloud to a Ubuntu 14.04 server.

First, you need some prereqs:

 sudo apt-get install php5-gd mysql-server

To begin, you need to add the repository for ownCloud for ubuntu 14.04.

wget -nv -O Release.key
apt-key add - < Release.key

Next, update the lists and install the package.

sh -c "echo 'deb /' >> /etc/apt/sources.list.d/owncloud.list"
apt-get update
apt-get install owncloud

Once the package is installed, access the ownCloud interface at http://SERVERNAME/owncloud

The first time you launch it, it will prompt you to create an admin id and password. Optionally, you can pick the Data folder location and choose MySQL vs SQLite.

owncloud setup 1404


After you create the admin user and sign into the OwnCloud Interface, if you are installing this for home use, you will probably want to enable some basic plug-ins.   If you plan on syncing calendars and contacts, then you will need at least those 2 add-ons.

Click on the Files entry on the upper left for the pull down menu shown below.   Click on the + to add new apps.

owncloud setup 1404


Select ‘Productivity’ from the left hand menu and Enable the Calendar and Contacts Applications.

owncloud setup 1404


Once completed, you will be taken to the web interface. Here you can add users and adjust settings as needed. I create the users here and that will pretty much complete the basic install.

ownCloud Admin menu

The last thing to do is to load a desktop client, available from ownCloud’s web page.

Next Steps that you should consider:
1) Enforce HTTPS connectivity to ownCloud; this is done through the admin menu.
2) Turn on Antivirus. Enabling this app in the Admin->apps menu sets up ClamAV to scan all uploaded content.
3) Make sure you are backing up the Owncloud Data Directory and the MySQL database.


There are additional options here for LDAP authentication, email alerts, and much more beyond this basic setup. Explore the add-on applications as well.

How to Install Mylar for use with SABnzbd on Ubuntu Linux

In other posts, I have installed SABnzbd, Sickbeard, and Couchpotato for auto downloading of TV and movies. In this case, I wanted to use the same process to automatically download Comic Book Files (CBRs) based on created lists, have SABnzbd download them, and have them moved to a permanent folder for viewing access.

Mylar is a great solution. It’s an open-source project maintained by evilhero; it works very well and is still under active development.

Before you start, make sure you have a working SABnzbd install on the Host.

Let’s get the pre-reqs out of the way also:

sudo apt-get install git-core

Download the latest copy of Mylar. I like using the /opt directory for full contained applications. Mylar is still in active development so I opt to have the latest development branch downloaded via git.

cd /opt
git clone -b development

To have Mylar start automatically, copy the Ubuntu init script from /opt/mylar/init-scripts to /etc/init.d, make it executable, and add it to auto start.

sudo cp /opt/mylar/init-scripts/ubuntu.init.d /etc/init.d/mylar
sudo chmod +x /etc/init.d/mylar
sudo update-rc.d mylar defaults

Create the user account for mylar to use and optionally add it to a user group.

## Create the system only mylar user
sudo adduser --system --no-create-home mylar

## If you use my tutorials, add mylar to the nzbd group so it can access the common Downloads folder.   Otherwise, you don't need this. 
sudo usermod -g nzbd mylar

Change the owner of the /opt/mylar directory to the newly created ‘mylar’ user

chown -R mylar:root /opt/mylar

Now create/edit the /etc/default/mylar file and add the path to the app and the user to run as. The variable names below are the ones the bundled init script expects; adjust them if your copy differs:

# path to app
APP_PATH=/opt/mylar

# user
RUN_AS=mylar

Start the service

/etc/init.d/mylar start

Mylar should now be available from that host on port 8090 using http://SERVERNAME:8090. The web interface should load and provide you with additional config options.

Now I want to get mylar to work with SABnzbd.

Bring up mylar web interface and click on Settings in the upper right and then click on the Download Settings tab. Fill in the information for your SABnzbd host. Note that in the API field, I used the NZB API from SABnzbd. Mylar doesn’t need full control over SAB, it only needs to add NZBs to SAB. Fill in the fields with the values for your system. If you have been using my examples, the image below should match the values you would use.

Configure Mylar

Last thing to do in order to get the basics up and running is to add a search provider. Here I use a custom NZB indexer. Click on upper right, for settings, select the search providers tab, check the box for Use Newznab, and fill in your indexer’s details. Be sure to save your changes.

Mylar Newznab Config

Now let's get the post-processing scripts ready. Locate the scripts in /opt/mylar/post-processing/sabnzbd. Copy the file to the directory you have configured in SAB to hold the processing scripts; I prefer to use a softlink instead of copying the file for easier upgrades.

ln -s /opt/mylar/post-processing/sabnzbd/ /opt/sabprocessingscripts/

In SAB, you will want to add a new category for the mylar downloads. I choose the category ‘comics’ but you can call it whatever you want, just so long as the mylar download category matches exactly.
Mylar Sab Config

With that, you should have a basic, working Mylar installed.

Troubleshooting Permissions
If you encounter problems with the creation, copying, renaming of any files, check your mylar account permissions. In this example, I added mylar to a group called nzbd. When Mylar creates new sub-directories for content, the Linux default is to create the directory with permissions of 0755 meaning that the group nzbd does not have write permissions. You can solve this issue in many ways.

Make sure that you have both Mylar and SABNzbd configured to create directories with permissions of 777 and 0777 respectively.

Mylar Permissions

SABNzbd Permissions

To make life easy, if you already have an nzbd user for SABnzbd to use, you can just run mylar under the nzbd account instead of creating a mylar account. This way mylar and SABNzbd are both running with the same user permissions.   If you do this, you’ll want to use ‘chown -R nzbd:nzbd /opt/mylar’ and change the RUN_AS user from mylar to nzbd.

For the security conscious, you can implement ACLs in linux to handle the permissions on directories. This requires the installation and configuration of an ACL utility on your system.

How to Install Universal Media Server UMS on Ubuntu in Headless mode

Updated for Ubuntu 14.04

Universal Media Server is a fork of the very useful PS3 Media Server. And although PS3MS was a great solution, it did have some shortcomings, especially with certain file formats or file containers. I tried UMS and loved it. It is easy to install and, at least for now, streams and transcodes every media file I have to support playback on any device, including the PS3 and the Sony SMP-N200 I use on other TVs.

So, working with any Ubuntu 14.04 server, here is my step by step to get UMS installed and working.

First you must have a Java 7 JRE installed on the server; the OpenJDK packages below cover that.

apt-get install software-properties-common
apt-get update
apt-get install openjdk-7-jre openjdk-7-jre-headless

With Java installed, we now need to add some other pre-reqs:

apt-get install mediainfo dcraw vlc-nox mplayer mencoder

I’m going to use the /opt directory for the install. Then we download the latest UMS package from sourceforge. You can check the UMS webpage to find the latest version. As I write this, the latest is 5.2.3. After the download is complete, unpack the file with tar. I create a softlink using /opt/ums so that when we need to upgrade, we can just point the softlink to the new directory while not touching the config files that we will be using in /etc/ later on.

cd /opt
tar -xvzf UMS-5.2.3-Java7.tgz
ln -s /opt/ums-5.2.3 ums
rm UMS-5.2.3-Java7.tgz

Next we need to create the init.d script to auto start the app when the server boots, as well as have better control over the service.
We will create /etc/init.d/ums.

nano /etc/init.d/ums

Copy the following into the new file:

#!/bin/bash
### BEGIN INIT INFO
# Provides:          ums
# Required-Start:    $local_fs $remote_fs $network
# Required-Stop:     $local_fs $remote_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Starts UMS program.
# Description:       Java Upnp Media Server dedicated to PS3
### END INIT INFO

#set -x

# Author: Papa Issa DIAKHATE <>
NAME="ums"
DESC="Universal Media Server"
# NOTE: DAEMON assumes the /opt/ums softlink created earlier in this guide; adjust if your path differs.
DAEMON="/opt/ums/UMS.sh"
DAEMON_OPTS=""
SCRIPTNAME="/etc/init.d/$NAME"
UMS_START=1 # Whether or not to start UMS at boot time.
DODTIME=30  # Time to wait for the server to die, in seconds.
            # If this value is set too low you might not
            # let the program die gracefully and 'restart' will not work

test -x $DAEMON || exit 1

# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions

# Include ums defaults if available
if [ -f "/etc/default/$NAME" ] ; then
        . /etc/default/$NAME
fi

# May we run the init.d script ?
[ $UMS_START = 1 ] || exit 1

# Some color codes
txtred=$'\e[0;31m' # Red
txtylw=$'\e[0;33m' # Yellow
txtrst=$'\e[0m'    # Text Reset

warnout(){
    echo >&2 -e ""$txtylw"Warning:$txtrst $1"
}

running(){
    pid=`pgrep -f 'java .*ums.jar.*'`
    [ -n "$pid" ]
}

do_start(){
    running && { warnout "$NAME is already running !"; exit 0; }
    echo "Starting $DESC : $NAME"
    UMS_PROFILE="$UMS_PROFILE" start-stop-daemon --start --quiet --background --oknodo \
        --exec $DAEMON -- $DAEMON_OPTS
}

rotate_logs(){
    if [ -e "/usr/share/ums/debug.log" ]; then
        count=5   # keep up to 5 rotated copies of debug.log
        while [ $count -ge 1 ]; do
            plus=$(($count + 1))
            if [ -e "/usr/share/ums/debug.log.$count" ]; then
                mv "/usr/share/ums/debug.log.$count" "/usr/share/ums/debug.log.$plus"
            fi
            count=$(($count - 1))
        done
        mv "/usr/share/ums/debug.log" "/usr/share/ums/debug.log.1"
    fi
}

do_stop(){
    running || { warnout "$NAME is NOT running !"; exit 0; }
    local countdown="$DODTIME"
    echo -e "Stopping $DESC : $NAME \c "
    kill -9 $pid
    while running; do
        if (($countdown >= 0)); then
            sleep 1; echo -n .;
            countdown=$(($countdown - 1))
        else
            break
        fi
    done
    echo ""
    # If still running, then try to send SIGINT signal
    running && { \
        echo >&2 "Using kill -s SIGINT instead"; \
        echo >&2 "If you see this message again, then you should increase the value of DODTIME in '$0'."; \
        kill -2 $pid; \
    }
    rotate_logs
    return 0
}

do_force_stop(){
    running || { warnout "$NAME is NOT running !"; exit 0; }
    echo "Stopping $DESC : $NAME"
    kill -9 $pid
    rotate_logs
    return 0
}

do_status(){
    echo -n " * $NAME is "
    ( running || { echo "NOT running "; exit 0; } )
    ( running && { echo "running (PID -> $(echo $pid))"; exit 0; } )
}

case "$1" in
    start)
        do_start
        ;;
    stop)
        do_stop
        ;;
    force-stop)
        do_force_stop
        ;;
    restart|force-restart|reload|force-reload)
        do_stop
        do_start
        ;;
    status)
        do_status
        ;;
    *)
        echo "Usage: $SCRIPTNAME {start|stop|force-stop|restart|force-restart|reload|force-reload|status}"
        exit 1
        ;;
esac

exit 0

Now add execute permissions to the script and add the UMS script to update-rc.d

chmod +x /etc/init.d/ums
update-rc.d ums defaults

A sample conf file that you could use is at /opt/ums/UMS.conf and could be copied into /etc/UMS.conf and edited to fit your needs. It has all the configurable options and is probably more than most will need. You should also copy in the WEB.conf file as well to handle web streams if you use that functionality. (Thanks Wolfgang Hochweller)

cp /opt/ums/UMS.conf /etc/
cp /opt/ums/WEB.conf /etc/

Configuration is done to the /etc/UMS.conf file. At the very least you will want to add the location of the media to share.

folders=/mnt/media/tv, /mnt/media/movies, /mnt/media/music 

Pay attention to the network-related settings as well, especially on hosts with multiple NICs.


Now start UMS:

service ums start

That’s it, it should be running and advertising itself as UPNP/DLNA on the local network.