msdkdec: make sure to use video memory on Linux
The block that sets the use_video_memory flag comes after the
gst_msdk_context_prepare condition, but that call always returns FALSE
when there are no other msdk elements in the pipeline. So the decoder
ends up with use_video_memory as FALSE.
Note that msdkvpp always sets use_video_memory to TRUE.
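
The ordering problem, roughly (a simplified sketch, not the literal
gstmsdkdec.c code; the "thiz" struct fields and the exact arguments to
gst_msdk_context_prepare are approximations for illustration):

    /* use_video_memory is only set inside the branch taken when
     * gst_msdk_context_prepare () finds a context shared by another
     * msdk element, so a pipeline with only msdkdec never reaches it. */
    if (gst_msdk_context_prepare (GST_ELEMENT_CAST (thiz), &thiz->context)) {
      thiz->use_video_memory = TRUE;   /* never reached without other msdk elements */
    }

    /* Intended behaviour on Linux: request video memory regardless of
     * whether gst_msdk_context_prepare () succeeded. */
    thiz->use_video_memory = TRUE;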
When use_video_memory is FALSE, msdkdec allocates the output frames with
posix_memalign (see gstmsdksystemmemory.c). The result is then copied back
into the GstVideoPool's buffers (or into the downstream pool's buffers if
any).

When use_video_memory is TRUE, msdkdec uses vaCreateSurfaces to create
VA-API surfaces for the hw decoder to decode into (see
gstmsdkvideomemory.c). The result is then copied into either the internal
GstVideoPool or the downstream pool if any (vaDeriveImage/vaMapBuffer is
used in order to read the surfaces).
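
For reference, the two paths boil down to roughly the following standalone
sketch, using the POSIX and libva calls named above (alignment, surface
format and the copy step are example values only, not what
gstmsdksystemmemory.c and gstmsdkvideomemory.c actually pick):

    #define _POSIX_C_SOURCE 200112L
    #include <stdlib.h>
    #include <va/va.h>

    /* System-memory path (use_video_memory == FALSE): the decoder writes
     * into aligned CPU memory; the result then has to be copied into the
     * output pool's buffers. */
    static void *
    alloc_system_frame (size_t size)
    {
      void *data = NULL;

      /* 32 is only an example alignment. */
      if (posix_memalign (&data, 32, size) != 0)
        return NULL;
      return data;
    }

    /* Video-memory path (use_video_memory == TRUE): the decoder writes
     * into VA surfaces; copying the result out goes through
     * vaDeriveImage () + vaMapBuffer (). */
    static VAStatus
    read_back_decoded_surface (VADisplay dpy, unsigned int width,
        unsigned int height)
    {
      VASurfaceID surface;
      VAImage image;
      void *plane = NULL;
      VAStatus status;

      status = vaCreateSurfaces (dpy, VA_RT_FORMAT_YUV420, width, height,
          &surface, 1, NULL, 0);
      if (status != VA_STATUS_SUCCESS)
        return status;

      status = vaDeriveImage (dpy, surface, &image);
      if (status == VA_STATUS_SUCCESS) {
        status = vaMapBuffer (dpy, image.buf, &plane);
        if (status == VA_STATUS_SUCCESS) {
          /* memcpy the planes into the internal pool's buffer or into the
           * downstream pool's buffer here. */
          vaUnmapBuffer (dpy, image.buf);
        }
        vaDestroyImage (dpy, image.image_id);
      }

      vaDestroySurfaces (dpy, &surface, 1);
      return status;
    }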