I have an AMD 5600XT graphics card which is supposed to support H.265 encoding and decoding, but for some reason when I'm transcoding from one format to another, the CPU is used instead.
Is this the anticipated behaviour or should ffmpeg be using the GPU for this process?
The current command I’m using is:
for i in *.mp4; do
  ffmpeg -i "$i" -c:v libx265 -vtag hvc1 -threads 16 "temp/$i"
done
libx265 is the software (CPU) encoder.
For hardware acceleration it's hevc_vaapi for H.265 and h264_vaapi for H.264, as in the example from the link given by @Yochanan. That example uses h264_vaapi; it should work if you replace it with hevc_vaapi. Also check the vaapi_device option and change it to the one available on your system.
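Adapting your loop, a hardware-encode version might look like this. This is only a sketch: /dev/dri/renderD128 is an assumption, so check which render node your system actually exposes (e.g. with `ls /dev/dri`).

```shell
# Sketch: HEVC encode on the GPU via VA-API.
# /dev/dri/renderD128 is an assumption; list your render nodes with: ls /dev/dri
for i in *.mp4; do
  ffmpeg -vaapi_device /dev/dri/renderD128 -i "$i" \
         -vf 'format=nv12,hwupload' \
         -c:v hevc_vaapi -vtag hvc1 "temp/$i"
done
```

The `format=nv12,hwupload` filter converts the decoded frames to a pixel format the encoder accepts and uploads them to GPU memory, which VA-API encoders require.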
There are certainly plenty of examples on the internet; a simple search for "ffmpeg hevc_vaapi" should give you plenty of results.
For NVIDIA NVENC it's h264_nvenc and hevc_nvenc.
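For comparison, an NVENC version is simpler since it handles the frame upload itself. A sketch, assuming an NVIDIA GPU and an ffmpeg build with nvenc support:

```shell
# Sketch: HEVC encode with NVENC on NVIDIA hardware.
ffmpeg -i input.mp4 -c:v hevc_nvenc -vtag hvc1 output.mp4
```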
There is also an option to use hardware acceleration for decoding the input file; you can find it with a search.
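For instance, with VA-API you can decode and encode both on the GPU and keep the frames in GPU memory end to end. Again a sketch, and the device path is an assumption:

```shell
# Sketch: decode and encode both on the GPU via VA-API.
# -hwaccel_output_format vaapi keeps decoded frames in GPU memory,
# so no extra upload filter is needed before the encoder.
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -hwaccel_output_format vaapi -i input.mp4 \
       -c:v hevc_vaapi -vtag hvc1 output.mp4
```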
That said, I've never used an AMD card and don't know much about them; I've only used NVENC.