Jellyfin supports hardware acceleration (HWA) of video encoding/decoding using FFmpeg. FFmpeg and Jellyfin can support multiple hardware acceleration implementations, such as Intel Quick Sync (QSV), AMD AMF, NVIDIA NVENC/NVDEC, OpenMAX OMX, and MediaCodec, through Video Acceleration APIs.
VAAPI is a Video Acceleration API that uses libva to interface with local drivers to provide HWA. QSV uses a modified (forked) version of VAAPI, interfaced with libmfx and Intel's proprietary drivers (see the list of supported processors for QSV).
| OS      | Recommended HW Acceleration        |
|---------|------------------------------------|
| Linux   | QSV, NVENC, VAAPI                  |
| Windows | QSV, NVENC, AMF, VAAPI             |
| macOS   | None (VideoToolbox support coming) |
NVIDIA cards are supported per FFmpeg's official list, though not every card has been tested. These drivers are recommended for Linux and Windows. Here is the official list of NVIDIA graphics cards for supported codecs, and an example of Ubuntu working with NVENC.
List of supported codecs for VAAPI.
AMF support on Linux is still not official, so AMD graphics cards are required to use VAAPI on Linux.
Zen CPUs are CPU-only: there is no hardware acceleration for any form of video decoding/encoding. You need an APU or dGPU for hardware acceleration.
Intel QSV Benchmarks on Linux.
On Windows, you can use the DXVA2/D3D11VA libraries for decoding and the libmfx library for encoding.
CentOS may require additional drivers for QSV.
Here's additional information to learn more.
Enabling Hardware Acceleration
Hardware acceleration options can be found in the Admin Dashboard under the Transcoding section. Select a valid hardware acceleration option from the drop-down menu, indicate a device if applicable, and check "Enable hardware encoding" to enable encoding as well as decoding, if your hardware supports this.
The hardware acceleration is available immediately for media playback. No server restart is required.
Each hardware acceleration type, as well as each Jellyfin installation type, requires different setup options before it can be used. It is always best to consult the FFmpeg documentation on the acceleration type you choose for the latest information.
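As a quick check, you can ask FFmpeg itself which acceleration methods it was built with; a minimal sketch, assuming `ffmpeg` is on your PATH (some Jellyfin installs bundle their own binary at a distribution-specific path instead):

```shell
# List the hardware acceleration methods this ffmpeg build supports
# (e.g. vaapi, qsv, cuda). Prints a note instead if ffmpeg is absent.
ffmpeg -hide_banner -hwaccels 2>/dev/null || echo "ffmpeg not found in PATH"
```

If the method you configured in the dashboard is not in this list, transcoding will fall back to software or fail.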
Acceleration on Docker
In order to use hardware acceleration in Docker, the devices must be passed to the container. To see what video devices are available, you can run `sudo lshw -c video` or `vainfo` on your machine.
NVIDIA GPUs aren't currently supported in docker-compose.
You can use `docker run` to start the server with a command like the one below.
```
docker run -d \
 --volume /path/to/config:/config \
 --volume /path/to/cache:/cache \
 --volume /path/to/media:/media \
 --user 1000:1000 \
 --net=host \
 --restart=unless-stopped \
 --device /dev/dri/renderD128:/dev/dri/renderD128 \
 --device /dev/dri/card0:/dev/dri/card0 \
 jellyfin/jellyfin
```
Alternatively, you can use docker-compose with a configuration file so you don't need to run a long command every time you restart your server.
```yaml
version: "3"
services:
  jellyfin:
    image: jellyfin/jellyfin
    user: 1000:1000
    network_mode: "host"
    volumes:
      - /path/to/config:/config
      - /path/to/cache:/cache
      - /path/to/media:/media
    devices:
      # VAAPI Devices
      - /dev/dri/renderD128:/dev/dri/renderD128
      - /dev/dri/card0:/dev/dri/card0
      # RPi 4
      - /dev/vchiq:/dev/vchiq
```
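Once the file is in place, the stack can be started from the directory containing it; a sketch that tries both CLI spellings (`docker-compose` is the standalone binary, `docker compose` the newer plugin):

```shell
# Start the Jellyfin container in the background using whichever
# compose variant is installed; prints a note if neither is available.
docker-compose up -d 2>/dev/null \
  || docker compose up -d 2>/dev/null \
  || echo "docker compose not installed"
```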
Configuring VAAPI acceleration on Debian/Ubuntu
Configuring VAAPI on Debian/Ubuntu requires some additional configuration to ensure permissions are correct.
To check information about VAAPI on your system, install and run `vainfo` from the command line.
- Configure VAAPI for your system by following the relevant documentation. Verify that a render device is now present in `/dev/dri`, and note the permissions and group available to write to it, in this case `render`:
```
$ ls -l /dev/dri
total 0
drwxr-xr-x 2 root root        100 Apr 13 16:37 by-path
crw-rw---- 1 root video  226,   0 Apr 13 16:37 card0
crw-rw---- 1 root video  226,   1 Apr 13 16:37 card1
crw-rw---- 1 root render 226, 128 Apr 13 16:37 renderD128
```
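Rather than eyeballing the `ls` output, `stat` can print the owning group directly; a sketch, guarded so it degrades gracefully on machines without a render node:

```shell
# Print the group that owns the render node; this is the group the
# jellyfin user must join (render in the listing above, video on some releases).
stat -c '%G' /dev/dri/renderD128 2>/dev/null || echo "renderD128 not present"
```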
On some releases, the group may be `video` instead of `render`.
- Add the Jellyfin service user to the above group to allow Jellyfin's FFmpeg process access to the device, and restart Jellyfin.
```
sudo usermod -aG render jellyfin
sudo systemctl restart jellyfin
```
Configure VAAPI acceleration in the "Transcoding" page of the Admin Dashboard. Enter the `/dev/dri/renderD128` device above as the `VA API Device` value.
Watch a movie, and verify that transcoding is occurring by monitoring GPU usage with `radeontop` or similar tools.
LXC or LXD Container
This has been tested with LXC 3.0 and may or may not work with older versions.
Follow the steps above to add the jellyfin user to the `render` or `video` group, depending on your circumstances.
- Add your GPU to the container.
```
$ lxc config device add <container name> gpu gpu gid=<gid of your video or render group>
```
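The `gid=` value can be looked up with `getent`; a sketch, assuming the group is named `render` (substitute `video` if that is what your distribution uses):

```shell
# Resolve the numeric GID of the render group for the lxc command above;
# prints a note instead if the group does not exist on this host.
gid=$(getent group render | cut -d: -f3)
echo "${gid:-no render group on this host}"
```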
- Make sure you have the card within the container:
```
$ lxc exec jellyfin -- ls -l /dev/dri
total 0
crw-rw---- 1 root video 226,   0 Jun  4 02:13 card0
crw-rw---- 1 root video 226,   0 Jun  4 02:13 controlD64
crw-rw---- 1 root video 226, 128 Jun  4 02:13 renderD128
```
Configure Jellyfin to use video acceleration and point it at the right device if the default option is wrong.
Try to play a video that requires transcoding and run the following; you should get a hit.
```
$ ps aux | grep ffmpeg | grep accel
```
- You can also try playing a video that requires transcoding, and if it plays you're good.
Raspberry Pi 3 and 4
- Add the Jellyfin service user to the `video` group to allow Jellyfin's FFmpeg process access to the encoder, and restart Jellyfin.
```
sudo usermod -aG video jellyfin
sudo systemctl restart jellyfin
```
If you are using a Raspberry Pi 4, you might need to run `sudo rpi-update` for kernel and firmware updates.
Select "OpenMAX OMX" as the hardware acceleration on the Transcoding tab of the Server Dashboard.
Change the amount of memory allocated to the GPU. The GPU can't handle accelerated decoding and encoding simultaneously.
```
sudo nano /boot/config.txt
```
For RPi4, add the line `gpu_mem=320` (see more here). For RPi3, add a `gpu_mem` line as well. You can set any value, but 320 is the recommended amount for 4K HEVC.
Run `vcgencmd get_mem arm && vcgencmd get_mem gpu` to verify the split between CPU and GPU memory, and `vcgencmd measure_temp && vcgencmd measure_clock arm` to monitor the temperature and clock speed of the CPU.
RPi4 currently doesn't support HWA decoding, only HWA encoding of H.264. Active cooling is required, passive cooling is insufficient for transcoding. For RPi3 in testing, transcoding was not working fast enough to run in real time because the video was being resized.
To verify that you are using the proper libraries, run this command against your transcoding log. This can be found at Admin Dashboard > Logs, or in `/var/log/jellyfin` if Jellyfin was installed via the repository.
```
grep -A2 'Stream mapping:' /var/log/jellyfin/ffmpeg-transcode-85a68972-7129-474c-9c5d-2d9949021b44.txt
```
This returned the following results.
```
Stream mapping:
  Stream #0:0 -> #0:0 (hevc (native) -> h264 (h264_omx))
  Stream #0:1 -> #0:1 (aac (native) -> mp3 (libmp3lame))
```
Stream #0:0 used software to decode HEVC and used HWA to encode.
Stream #0:1 used software for both steps, since the audio was encoded with `libmp3lame`. Decoding is easier than encoding, so these are good results overall. HWA decoding is a work in progress.
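The hardware/software distinction can be checked mechanically: hardware encoders carry a vendor suffix in the stream mapping. A sketch against the sample line above (the suffix list here is illustrative, not exhaustive):

```shell
# Sample mapping line copied from the log excerpt above
line='Stream #0:0 -> #0:0 (hevc (native) -> h264 (h264_omx))'
# Hardware H.264 encoders are named h264_<vendor>, e.g. h264_omx,
# h264_vaapi, h264_qsv, h264_nvenc, h264_amf; libx264 is the software encoder.
if printf '%s\n' "$line" | grep -qE 'h264_(omx|vaapi|qsv|nvenc|amf)'; then
  echo "hardware encoder in use"
else
  echo "software encoder"
fi
```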