
Display Stream With FFmpeg, Python And OpenCV

Situation: I have a Basler camera connected to a Raspberry Pi, and I am trying to livestream its feed with FFmpeg to a TCP port on my Windows PC in order to monitor what's happening.

Solution 1:

You can read the decoded frame from p1.stdout, convert it to a NumPy array, and reshape it.

  • Change the command to get decoded frames in rawvideo format and BGR pixel format:

    command = ['C:/ffmpeg/bin/ffmpeg.exe',
               '-rtsp_flags', 'listen',
               '-i', 'rtsp://192.168.1.xxxx:5555/live.sdp?tcp?',
               '-f', 'image2pipe',    # Use image2pipe demuxer
               '-pix_fmt', 'bgr24',   # Set BGR pixel format
               '-vcodec', 'rawvideo', # Get rawvideo output format.
               '-']
    
  • Read the raw video frame from p1.stdout:

    raw_frame = p1.stdout.read(width*height*3)
    
  • Convert the bytes read into a NumPy array, and reshape it to video frame dimensions:

    frame = np.frombuffer(raw_frame, np.uint8)
    frame = frame.reshape((height, width, 3))
    

Now you can show the frame by calling cv2.imshow('image', frame).

The solution assumes you know the video frame size (width and height) in advance.

The code sample below includes a part that reads the width and height using cv2.VideoCapture, but I am not sure if it's going to work in your case (due to '-rtsp_flags', 'listen'). If it does work, you can try capturing using OpenCV instead of FFmpeg.
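In that case, a minimal sketch may look something like the following (this assumes cv2.VideoCapture can open the RTSP URL directly, without the "listening" flag; the URL below is just a placeholder):

import cv2

# Placeholder URL - replace with the address of your camera stream.
in_stream = 'rtsp://192.168.1.xxxx:5555/live.sdp'

cap = cv2.VideoCapture(in_stream)  # Let OpenCV handle connection and decoding.

while True:
    ret, frame = cap.read()  # Grab and decode the next frame.

    if not ret:
        print('Error reading frame!!!')  # Stop when no frame could be read.
        break

    cv2.imshow('image', frame)
    cv2.waitKey(1)

cap.release()
cv2.destroyAllWindows()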

The following code is a complete "working sample" that uses a public RTSP stream for testing:

import cv2
import numpy as np
import subprocess

# Use public RTSP Stream for testing
in_stream = 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov'

if False:
    # Read video width, height and framerate using OpenCV (use it if you don't know the size of the video frames).
    # Use public RTSP Streaming for testing:
    cap = cv2.VideoCapture(in_stream)

    framerate = cap.get(5)  # frame rate

    # Get resolution of input video
    width  = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Release VideoCapture - it was used just for getting video resolution
    cap.release()
else:
    # Set the size here, if video frame size is known
    width = 240
    height = 160


command = ['C:/ffmpeg/bin/ffmpeg.exe',
           #'-rtsp_flags', 'listen',  # The "listening" feature is not working (probably because the stream is from the web)
           '-rtsp_transport', 'tcp',  # Force TCP (for testing)
           '-max_delay', '30000000', # 30 seconds (sometimes needed because the stream is from the web).
           '-i', in_stream,
           '-f', 'image2pipe',
           '-pix_fmt', 'bgr24',
           '-vcodec', 'rawvideo', '-an', '-']

# Open sub-process that gets in_stream as input and uses stdout as an output PIPE.
p1 = subprocess.Popen(command, stdout=subprocess.PIPE)

while True:
    # read width*height*3 bytes from stdout (1 frame)
    raw_frame = p1.stdout.read(width*height*3)

    if len(raw_frame) != (width*height*3):
        print('Error reading frame!!!')  # Break the loop in case of an error (too few bytes were read).
        break

    # Convert the bytes read into a NumPy array, and reshape it to video frame dimensions
    frame = np.frombuffer(raw_frame, np.uint8)
    frame = frame.reshape((height, width, 3))

    # Show video frame
    cv2.imshow('image', frame)
    cv2.waitKey(1)
  
# Wait one more second and terminate the sub-process
try:
    p1.wait(1)
except subprocess.TimeoutExpired:
    p1.terminate()

cv2.destroyAllWindows()
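Note: the loop above only ends when a frame read fails. If you also want to stop monitoring from the keyboard, you may replace the two display lines inside the loop with a key check, for example:

    # Show video frame, and stop when 'q' is pressed
    cv2.imshow('image', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break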

Sample frame (just for fun): [sample frame image]


Update:

Reading width and height using FFprobe:

When you don't know the video resolution in advance, you may use FFprobe to get the information.

Here is a code sample for reading width and height using FFprobe:

import subprocess
import json

# Use public RTSP Stream for testing
in_stream = 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov'

probe_command = ['C:/ffmpeg/bin/ffprobe.exe',
                 '-loglevel', 'error',
                 '-rtsp_transport', 'tcp',  # Force TCP (for testing)
                 '-select_streams', 'v:0',  # Select only video stream 0.
                 '-show_entries', 'stream=width,height',  # Select only width and height entries
                 '-of', 'json',  # Get output in JSON format
                 in_stream]

# Read video width, height using FFprobe:
p0 = subprocess.Popen(probe_command, stdout=subprocess.PIPE)
probe_str = p0.communicate()[0]  # Read the content of p0.stdout (the output of FFprobe)
p0.wait()
probe_dct = json.loads(probe_str)  # Convert the JSON output to a dictionary.

# Get width and height from the dictionary
width = probe_dct['streams'][0]['width']
height = probe_dct['streams'][0]['height']
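
If you need the probe in more than one place, you may wrap the same logic in a small helper function (get_video_resolution is just a name I chose here, not an FFprobe or OpenCV API):

import subprocess
import json


def get_video_resolution(ffprobe_path, in_stream):
    """Return (width, height) of the first video stream, using FFprobe."""
    probe_command = [ffprobe_path,
                     '-loglevel', 'error',
                     '-rtsp_transport', 'tcp',  # Force TCP (for testing)
                     '-select_streams', 'v:0',  # Select only video stream 0.
                     '-show_entries', 'stream=width,height',  # Select only width and height entries
                     '-of', 'json',  # Get output in JSON format
                     in_stream]

    p0 = subprocess.Popen(probe_command, stdout=subprocess.PIPE)
    probe_str = p0.communicate()[0]  # Read the FFprobe output from stdout.
    p0.wait()

    probe_dct = json.loads(probe_str)  # Convert the JSON output to a dictionary.
    return probe_dct['streams'][0]['width'], probe_dct['streams'][0]['height']


width, height = get_video_resolution('C:/ffmpeg/bin/ffprobe.exe',
                                     'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov')
print(width, height)  # Expected to print 240 160 for the public test stream.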
