An RK3588 RTSP server push-streaming solution from GitHub, with notes - 02
The overall approach follows the previous post:
https://blog.csdn.net/qq_31764341/article/details/134810566
This post mainly covers hardware-accelerated H.264 encoding on the RK3588 using Rockchip's MPP library.
Same routine as before: I don't produce code, I only move it around. Today we carry over Rockchip's official demo code and write down the whole debugging journey. Between the two posts, you will definitely be able to get a stream out of the 3588.
The code is pasted in a lot of detail... please don't unfollow, thanks. Later I'll also write up building FFmpeg with hardware decoding plus OpenCV pull-stream sample code.
The MPP library
The RK3588 encoder has its own demo, and that demo is usually reached through the pile of links below (from the MPP README):
4. you can get demo about mpp applied to linux and android.
Linux : https://github.com/WainDing/mpp_linux_cpp
https://github.com/MUZLATAN/ffmpeg_rtsp_mpp
Android : https://github.com/c-xh/RKMediaCodecDemo
5. official github: https://github.com/rockchip-linux/mpp
develop github: https://github.com/HermanChen/mpp
develop gitee : https://gitee.com/hermanchen82/mpp
6. Commit message format should base on https://keepachangelog.com/en/1.0.0/
More document can be found at http://opensource.rock-chips.com/wiki_Mpp
Usually we download the repo from https://github.com/rockchip-linux/mpp, copy it onto the board, and run the following in the mpp directory:
make
make install
After that you can pick up the test app programs under the test directory; it looks roughly like this:
xxx@orangepi5plus:~/xxx/mpp-develop/test$ ls
CMakeFiles mpi_dec_mt_test mpi_dec_test mpi_rc2_test mpp_info_test.c vpu_api_test
cmake_install.cmake mpi_dec_mt_test.c mpi_dec_test.c mpi_rc2_test.c mpp_parse_cfg.c vpu_api_test.c
CMakeLists.txt mpi_dec_multi_test mpi_enc_mt_test mpi_rc.cfg mpp_parse_cfg.h
dec.yuv mpi_dec_multi_test.c mpi_enc_mt_test.cpp mpp_event_trigger.c output.h264
gastest.o mpi_dec_nt_test mpi_enc_test mpp_event_trigger.h output.yuv
Makefile mpi_dec_nt_test.c mpi_enc_test.c mpp_info_test README.md
Encoding
This post covers H.264 encoding first; partly to milk a little more traffic, I've split the topic across two posts.
Encoding: the two enc programs in this directory are both encoder examples, one multi-threaded and one single-threaded, and the corresponding source files sit in the same directory.
Using their codec in your own project
If you want to put their files somewhere else and build against them there, a CMakeLists.txt written like the following is enough (a small link-check source you can point it at follows right after the block):
cmake_minimum_required(VERSION 3.5)
project(rtspserver)
set(CMAKE_INCLUDE_CURRENT_DIR ON)
# -g enables debug symbols
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DSOCKLEN_T=socklen_t -g ")
#find_package(OpenCV 4.8.0 REQUIRED)
#find_package(OpenSSL REQUIRED)
# MPP
set(MPP_PATH /home/orangepi/XXX/mpp-develop/inc)
set(MPP_LIBS /home/orangepi/XXX/mpp-develop/mpp/librockchip_mpp.so)
include_directories(${MPP_PATH})
# OSAL
set(OSAL_PATH /home/orangepi/XXX/mpp-develop/osal/inc/ /home/orangepi/XXX/mpp-develop/utils)
set(OSAL_LIBS /home/orangepi/XXX/mpp-develop/osal/libosal.a /home/orangepi/XXX/mpp-develop/utils/libutils.a)
include_directories(${OSAL_PATH})
# RGA: put the RGA library under ./3rdparty/rga/RK3588 next to this file
set(RGA_PATH ${CMAKE_SOURCE_DIR}/3rdparty/rga/RK3588)
set(RGA_LIB ${RGA_PATH}/lib/Linux/aarch64/librga.so)
include_directories(${RGA_PATH}/include)
# your main source file; "rtsp" is the name of the generated executable
add_executable(rtsp
example/rtsp_server.cpp
)
# make sure the libraries get linked in here (if you keep the OpenCV / OpenSSL entries on the next line, re-enable the find_package lines above)
target_link_libraries(rtsp ${MPP_LIBS} ${OSAL_LIBS} ${OpenCV_LIBS} ${RGA_LIB} OpenSSL::SSL OpenSSL::Crypto )
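To sanity-check that the include and library paths above actually resolve, you can temporarily point add_executable at a tiny source file like the sketch below instead of your real rtsp_server.cpp. This is my own stand-in (the file name link_check.cpp is hypothetical, not part of the demo): it only creates and destroys an MPP context, which is enough to prove that rk_mpi.h and librockchip_mpp.so are found and linked.
// link_check.cpp - hypothetical stand-in source used only to verify the CMake setup above
#include <cstdio>
#include "rk_mpi.h"
int main() {
    MppCtx ctx = NULL;
    MppApi *mpi = NULL;
    MPP_RET ret = mpp_create(&ctx, &mpi); // allocate an MPP context and its API table
    if (ret != MPP_OK) {
        printf("mpp_create failed, ret %d\n", ret);
        return -1;
    }
    printf("mpp_create ok, MPP headers and library resolve\n");
    mpp_destroy(ctx);                     // release the context again
    return 0;
}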
Breaking down the encoder program
It supports three usage modes; see the official documentation, usually found at Rk3588-linux-v002\linux\docs\Linux\Multimedia\Rockchip_Developer_Guide_MPP_CN.pdf (if anyone needs the contents of Rk3588-linux-v002... DM me, 30 for it, or hunt it down elsewhere).
The guide describes three ways of using the encoder/decoder: chapter 2 covers them from the interface angle and chapter 3 from the application angle. The first two modes use the MppPacket / MppFrame pair of structures, while the third, advanced mode uses MppTask. Of the first two, one just stuffs data in blindly, and the other tells MPP the buffer size and format to allocate before stuffing data in; the third has you define your own task. I only studied the simplest mode from the Linux demo; the *_mt_* variant is the multi-threaded one and I didn't dig into it either. The goal here is simply to get something working first.
mpi_enc_test.c
A rough walkthrough first; my cleaned-up low-level code is pasted further down.
Here is a tip for quick progress: add "-g" to the C/C++ compile flags in their CMakeLists.txt, rebuild, and combine that with the VS Code remote-debug setup from one of my earlier posts. You can then step through line by line and see exactly which configuration it ends up running with.
main
Start with main:
RK_S32 ret = MPP_NOK;
// allocate and initialize MpiEncTestArgs (this also determines whether multi-threading is used)
MpiEncTestArgs* cmd = mpi_enc_test_cmd_get();
// parse the cmd options: use argc/argv to fill in cmd
ret = mpi_enc_test_cmd_update_by_args(cmd, argc, argv);
if (ret)
goto DONE;//
mpi_enc_test_cmd_show_opt(cmd);// print the parsed input parameters
ret = enc_test_multi(cmd, argv[0]);// run according to the input parameters
enc_test_multi
The key part is the pthread_create call below; after that it simply waits until either all frames have been processed or Enter is pressed on the keyboard, and then stops the threads.
ret = pthread_create(&ctxs[i].thd, NULL, enc_test, &ctxs[i]);
Many of the parameters here are only used for timing statistics; you can trim them when using the code yourself.
enc_test
mpp_buffer_get allocates the buffers. The demo also sets up a lot of parameters, some of which are just read and written back unchanged, apparently only to demonstrate the operations. Depending on your needs you can probably delete part of it, though I haven't tried.
The core statements are below:
ret = test_ctx_init(info);// initialize the test context
// get an internal buffer group (DRM type)
ret = mpp_buffer_group_get_internal(&p->buf_grp, MPP_BUFFER_TYPE_DRM);
// allocate buffers from the group into p->frm_buf / p->pkt_buf / p->md_info
ret = mpp_buffer_get(p->buf_grp, &p->frm_buf, p->frame_size + p->header_size);
ret = mpp_buffer_get(p->buf_grp, &p->pkt_buf, p->frame_size);
ret = mpp_buffer_get(p->buf_grp, &p->md_info, p->mdinfo_size);
ret = mpp_create(&p->ctx, &p->mpi);// create the MPP instance
p->mpi->control(p->ctx, MPP_SET_OUTPUT_TIMEOUT, &timeout);// set the output timeout
mpp_init(p->ctx, MPP_CTX_ENC, p->type);// initialize MPP as an encoder of the given type
mpp_enc_cfg_init(&p->cfg);// initialize the encoder config object
p->mpi->control(p->ctx, MPP_ENC_GET_CFG, p->cfg);
test_mpp_enc_cfg_setup(info);// bulk parameter setup
ret = test_mpp_run(info);// process frames
test_mpp_run
This function processes one complete frame of data.
Overall it does the following: you fill one frame of data into frame via buf, call the put function to push the frame into the encoder, and then use the get function to fetch the encoded packet back.
while (!p->pkt_eos)// loop until the packet end-of-stream flag is set
// the next call reads data from fp_input into buf; buf is the address grabbed earlier from MPP via mpp_buffer_get_ptr(p->frm_buf), just use it like a plain pointer
ret = read_image(buf, p->fp_input, p->width, p->height,
p->hor_stride, p->ver_stride, p->fmt);
// the other branch grabs data from a camera via V4L2; I didn't look at it closely
ret = mpp_frame_init(&frame);
// initialize the frame and set its format, i.e. the encoder input
mpp_frame_set_width(frame, p->width);
mpp_frame_set_height(frame, p->height);
mpp_frame_set_hor_stride(frame, p->hor_stride);
mpp_frame_set_ver_stride(frame, p->ver_stride);
mpp_frame_set_fmt(frame, p->fmt);
mpp_frame_set_eos(frame, p->frm_eos);
mpp_frame_set_buffer(frame, p->frm_buf);
// initialize the packet, i.e. where the encoded result goes
meta = mpp_frame_get_meta(frame);
mpp_packet_init_with_buffer(&packet, p->pkt_buf);
/* NOTE: It is important to clear output packet length!! */
mpp_packet_set_length(packet, 0);
mpp_meta_set_packet(meta, KEY_OUTPUT_PACKET, packet);
mpp_meta_set_buffer(meta, KEY_MOTION_INFO, p->md_info);
// in between come the optional bits: OSD, user data, ROI and so on
// push one frame into the encoder
ret = mpi->encode_put_frame(ctx, frame);
// fetch a packet back; a single call may not return everything, so it keeps looping in the do-while until the frame is complete, then breaks out
ret = mpi->encode_get_packet(ctx, &packet);
// write the result to a file
fwrite(ptr, 1, len, p->fp_output);
My low-level code
One frame is processed per call to postAframe.
encoder.cpp (this version works for me…):
#include "encoder.h"
#include <opencv2/opencv.hpp>
//#include "videoThread.h"
// #include <liveMedia.hh>
// #include <GroupsockHelper.hh>
// #include <BasicUsageEnvironment.hh>
// #include <H264VideoRTPSource.hh>
#include <unistd.h>
/* headers for creating a named pipe and writing data to it (only used by the commented-out FIFO code below) */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
// #include <QDebug>
// used for initialization
MPP_RET test_ctx_init(MpiEncMultiCtxInfo *info)
{
MpiEncTestData *p = &info->ctx;
MPP_RET ret = MPP_OK;
// parameters that the demo normally takes from the command line, hardcoded here
p->width = 1280;
p->height = 720;
p->hor_stride = (MPP_ALIGN(p->width, 16)*3); // for RGB888 the horizontal stride is in bytes, hence the *3
p->ver_stride = (MPP_ALIGN(p->height, 16));
p->fmt = MPP_FMT_RGB888;
p->type = MPP_VIDEO_CodingAVC;
p->bps = 0;
p->bps_min = 0;
p->bps_max = 0;
p->rc_mode = MPP_ENC_RC_MODE_VBR;
p->frame_num = 1;
p->gop_mode = 0;
p->gop_len = 5;
p->vi_len = 0;
p->fps_in_flex = 0;
p->fps_in_den = 0;
p->fps_in_num = 0;
p->fps_out_flex = 0;
p->fps_out_den = 0;
p->fps_out_num = 0;
p->scene_mode = 0;
p->mdinfo_size = (MPP_VIDEO_CodingHEVC == p->type) ?
(MPP_ALIGN(p->hor_stride, 32) >> 5) *
(MPP_ALIGN(p->ver_stride, 32) >> 5) * 16 :
(MPP_ALIGN(p->hor_stride, 64) >> 6) *
(MPP_ALIGN(p->ver_stride, 16) >> 4) * 16;
// p->fp_input = fopen("/home/orangepi/code/mpp-develop/test/dog_bike_car_448x448.bgr", "rb");
// if (NULL == p->fp_input) {
// mpp_err("failed to open input file %s\n", "/home/orangepi/code/mpp-develop/test/dog_bike_car_448x448.bgr");
// mpp_err("create default yuv image for test\n");
// }
// }
// } note: one pointer, the data-input pointer p->fp_input, is left unset here (it stays NULL)
// test H.264 output file
p->fp_output = fopen("./1.h264", "w+b");
if (NULL == p->fp_output) {
mpp_err("failed to open output file %s\n", "./1.h264");
ret = MPP_ERR_OPEN_FILE;
}
// }
// update resource parameter
switch (p->fmt & MPP_FRAME_FMT_MASK) {
case MPP_FMT_YUV420SP:
case MPP_FMT_YUV420P: {
p->frame_size = MPP_ALIGN(p->hor_stride, 64) * MPP_ALIGN(p->ver_stride, 64) * 3 / 2;
} break;
case MPP_FMT_YUV422_YUYV :
case MPP_FMT_YUV422_YVYU :
case MPP_FMT_YUV422_UYVY :
case MPP_FMT_YUV422_VYUY :
case MPP_FMT_YUV422P :
case MPP_FMT_YUV422SP : {
p->frame_size = MPP_ALIGN(p->hor_stride, 64) * MPP_ALIGN(p->ver_stride, 64) * 2;
} break;
case MPP_FMT_RGB444 :
case MPP_FMT_BGR444 :
case MPP_FMT_RGB555 :
case MPP_FMT_BGR555 :
case MPP_FMT_RGB565 :
case MPP_FMT_BGR565 :
case MPP_FMT_RGB888 :
case MPP_FMT_BGR888 :
case MPP_FMT_RGB101010 :
case MPP_FMT_BGR101010 :
case MPP_FMT_ARGB8888 :
case MPP_FMT_ABGR8888 :
case MPP_FMT_BGRA8888 :
case MPP_FMT_RGBA8888 : {
p->frame_size = MPP_ALIGN(p->hor_stride, 64) * MPP_ALIGN(p->ver_stride, 64);
} break;
default: {
p->frame_size = MPP_ALIGN(p->hor_stride, 64) * MPP_ALIGN(p->ver_stride, 64) * 4;
} break;
}
if (MPP_FRAME_FMT_IS_FBC(p->fmt)) {
if ((p->fmt & MPP_FRAME_FBC_MASK) == MPP_FRAME_FBC_AFBC_V1)
p->header_size = MPP_ALIGN(MPP_ALIGN(p->width, 16) * MPP_ALIGN(p->height, 16) / 16, SZ_4K);
else
p->header_size = MPP_ALIGN(p->width, 16) * MPP_ALIGN(p->height, 16) / 16;
} else {
p->header_size = 0;
}
return ret;
}
MPP_RET test_mpp_enc_cfg_setup(MpiEncMultiCtxInfo *info)
{
MpiEncTestData *p = &info->ctx;
MppApi *mpi = p->mpi;
MppCtx ctx = p->ctx;
MppEncCfg cfg = p->cfg;
RK_U32 quiet = 0;
MPP_RET ret;
RK_U32 rotation;
RK_U32 mirroring;
RK_U32 flip;
RK_U32 gop_mode = p->gop_mode;
MppEncRefCfg ref = NULL;
/* setup default parameter */
if (p->fps_in_den == 0)
p->fps_in_den = 1;
if (p->fps_in_num == 0)
p->fps_in_num = 15;
if (p->fps_out_den == 0)
p->fps_out_den = 1;
if (p->fps_out_num == 0)
p->fps_out_num = 15;
if (!p->bps)
p->bps = p->width * p->height / 8 * (p->fps_out_num / p->fps_out_den);
mpp_enc_cfg_set_s32(cfg, "tune:scene_mode", p->scene_mode);
mpp_enc_cfg_set_s32(cfg, "prep:width", p->width);
mpp_enc_cfg_set_s32(cfg, "prep:height", p->height);
mpp_enc_cfg_set_s32(cfg, "prep:hor_stride", p->hor_stride);
mpp_enc_cfg_set_s32(cfg, "prep:ver_stride", p->ver_stride);
mpp_enc_cfg_set_s32(cfg, "prep:format", p->fmt);
mpp_enc_cfg_set_s32(cfg, "rc:mode", p->rc_mode);
/* fix input / output frame rate */
mpp_enc_cfg_set_s32(cfg, "rc:fps_in_flex", p->fps_in_flex);
mpp_enc_cfg_set_s32(cfg, "rc:fps_in_num", p->fps_in_num);
mpp_enc_cfg_set_s32(cfg, "rc:fps_in_denorm", p->fps_in_den);
mpp_enc_cfg_set_s32(cfg, "rc:fps_out_flex", p->fps_out_flex);
mpp_enc_cfg_set_s32(cfg, "rc:fps_out_num", p->fps_out_num);
mpp_enc_cfg_set_s32(cfg, "rc:fps_out_denorm", p->fps_out_den);
/* drop frame or not when bitrate overflow */
mpp_enc_cfg_set_u32(cfg, "rc:drop_mode", MPP_ENC_RC_DROP_FRM_DISABLED);
mpp_enc_cfg_set_u32(cfg, "rc:drop_thd", 20); /* 20% of max bps */
mpp_enc_cfg_set_u32(cfg, "rc:drop_gap", 1); /* Do not continuous drop frame */
/* setup bitrate for different rc_mode */
mpp_enc_cfg_set_s32(cfg, "rc:bps_target", p->bps);
switch (p->rc_mode) {
case MPP_ENC_RC_MODE_FIXQP : {
/* do not setup bitrate on FIXQP mode */
} break;
case MPP_ENC_RC_MODE_CBR : {
/* CBR mode has narrow bound */
mpp_enc_cfg_set_s32(cfg, "rc:bps_max", p->bps_max ? p->bps_max : p->bps * 17 / 16);
mpp_enc_cfg_set_s32(cfg, "rc:bps_min", p->bps_min ? p->bps_min : p->bps * 15 / 16);
} break;
case MPP_ENC_RC_MODE_VBR :
case MPP_ENC_RC_MODE_AVBR : {
/* VBR mode has wide bound */
mpp_enc_cfg_set_s32(cfg, "rc:bps_max", p->bps_max ? p->bps_max : p->bps * 17 / 16);
mpp_enc_cfg_set_s32(cfg, "rc:bps_min", p->bps_min ? p->bps_min : p->bps * 1 / 16);
} break;
default : {
/* default use CBR mode */
mpp_enc_cfg_set_s32(cfg, "rc:bps_max", p->bps_max ? p->bps_max : p->bps * 17 / 16);
mpp_enc_cfg_set_s32(cfg, "rc:bps_min", p->bps_min ? p->bps_min : p->bps * 15 / 16);
} break;
}
/* setup qp for different codec and rc_mode */
switch (p->type) {
case MPP_VIDEO_CodingAVC :
case MPP_VIDEO_CodingHEVC : {
switch (p->rc_mode) {
case MPP_ENC_RC_MODE_FIXQP : {
RK_S32 fix_qp = 0;
mpp_enc_cfg_set_s32(cfg, "rc:qp_init", fix_qp);
mpp_enc_cfg_set_s32(cfg, "rc:qp_max", fix_qp);
mpp_enc_cfg_set_s32(cfg, "rc:qp_min", fix_qp);
mpp_enc_cfg_set_s32(cfg, "rc:qp_max_i", fix_qp);
mpp_enc_cfg_set_s32(cfg, "rc:qp_min_i", fix_qp);
mpp_enc_cfg_set_s32(cfg, "rc:qp_ip", 0);
mpp_enc_cfg_set_s32(cfg, "rc:fqp_min_i", fix_qp);
mpp_enc_cfg_set_s32(cfg, "rc:fqp_max_i", fix_qp);
mpp_enc_cfg_set_s32(cfg, "rc:fqp_min_p", fix_qp);
mpp_enc_cfg_set_s32(cfg, "rc:fqp_max_p", fix_qp);
} break;
case MPP_ENC_RC_MODE_CBR :
case MPP_ENC_RC_MODE_VBR :
case MPP_ENC_RC_MODE_AVBR : {
mpp_enc_cfg_set_s32(cfg, "rc:qp_init", -1);
mpp_enc_cfg_set_s32(cfg, "rc:qp_max", 51);
mpp_enc_cfg_set_s32(cfg, "rc:qp_min", 10);
mpp_enc_cfg_set_s32(cfg, "rc:qp_max_i", 51);
mpp_enc_cfg_set_s32(cfg, "rc:qp_min_i", 10);
mpp_enc_cfg_set_s32(cfg, "rc:qp_ip", 2);
mpp_enc_cfg_set_s32(cfg, "rc:fqp_min_i", 10);
mpp_enc_cfg_set_s32(cfg, "rc:fqp_max_i", 51);
mpp_enc_cfg_set_s32(cfg, "rc:fqp_min_p", 10);
mpp_enc_cfg_set_s32(cfg, "rc:fqp_max_p", 51);
} break;
default : {
mpp_err_f("unsupport encoder rc mode %d\n", p->rc_mode);
} break;
}
} break;
case MPP_VIDEO_CodingVP8 : {
/* vp8 only setup base qp range */
mpp_enc_cfg_set_s32(cfg, "rc:qp_init", 40);
mpp_enc_cfg_set_s32(cfg, "rc:qp_max", 127);
mpp_enc_cfg_set_s32(cfg, "rc:qp_min", 0);
mpp_enc_cfg_set_s32(cfg, "rc:qp_max_i",127);
mpp_enc_cfg_set_s32(cfg, "rc:qp_min_i", 0);
mpp_enc_cfg_set_s32(cfg, "rc:qp_ip", 6);
} break;
case MPP_VIDEO_CodingMJPEG : {
/* jpeg use special codec config to control qtable */
mpp_enc_cfg_set_s32(cfg, "jpeg:q_factor", 80);
mpp_enc_cfg_set_s32(cfg, "jpeg:qf_max", 99);
mpp_enc_cfg_set_s32(cfg, "jpeg:qf_min", 1);
} break;
default : {
} break;
}
/* setup codec */
mpp_enc_cfg_set_s32(cfg, "codec:type", p->type);
switch (p->type) {
case MPP_VIDEO_CodingAVC : {
RK_U32 constraint_set;
/*
* H.264 profile_idc parameter
* 66 - Baseline profile
* 77 - Main profile
* 100 - High profile
*/
mpp_enc_cfg_set_s32(cfg, "h264:profile", 100);
/*
* H.264 level_idc parameter
* 10 / 11 / 12 / 13 - qcif@15fps / cif@7.5fps / cif@15fps / cif@30fps
* 20 / 21 / 22 - cif@30fps / half-D1@25fps / D1@12.5fps
* 30 / 31 / 32 - D1@25fps / 720p@30fps / 720p@60fps
* 40 / 41 / 42 - 1080p@30fps / 1080p@30fps / 1080p@60fps
* 50 / 51 / 52 - 4K@30fps
*/
mpp_enc_cfg_set_s32(cfg, "h264:level", 40);
mpp_enc_cfg_set_s32(cfg, "h264:cabac_en", 1);
mpp_enc_cfg_set_s32(cfg, "h264:cabac_idc", 0);
mpp_enc_cfg_set_s32(cfg, "h264:trans8x8", 1);
mpp_env_get_u32("constraint_set", &constraint_set, 0);
if (constraint_set & 0x3f0000)
mpp_enc_cfg_set_s32(cfg, "h264:constraint_set", constraint_set);
} break;
case MPP_VIDEO_CodingHEVC :
case MPP_VIDEO_CodingMJPEG :
case MPP_VIDEO_CodingVP8 : {
} break;
default : {
mpp_err_f("unsupport encoder coding type %d\n", p->type);
} break;
}
p->split_mode = 0;
p->split_arg = 0;
p->split_out = 0;
mpp_env_get_u32("split_mode", &p->split_mode, MPP_ENC_SPLIT_NONE);
mpp_env_get_u32("split_arg", &p->split_arg, 0);
mpp_env_get_u32("split_out", &p->split_out, 0);
if (p->split_mode) {
mpp_log_q(quiet, "%p split mode %d arg %d out %d\n", ctx,
p->split_mode, p->split_arg, p->split_out);
mpp_enc_cfg_set_s32(cfg, "split:mode", p->split_mode);
mpp_enc_cfg_set_s32(cfg, "split:arg", p->split_arg);
mpp_enc_cfg_set_s32(cfg, "split:out", p->split_out);
}
mpp_env_get_u32("mirroring", &mirroring, 0);
mpp_env_get_u32("rotation", &rotation, 0);
mpp_env_get_u32("flip", &flip, 0);
mpp_enc_cfg_set_s32(cfg, "prep:mirroring", mirroring);
mpp_enc_cfg_set_s32(cfg, "prep:rotation", rotation);
mpp_enc_cfg_set_s32(cfg, "prep:flip", flip);
// config gop_len and ref cfg
mpp_enc_cfg_set_s32(cfg, "rc:gop", p->gop_len ? p->gop_len : p->fps_out_num * 2);
mpp_env_get_u32("gop_mode", &gop_mode, gop_mode);
if (gop_mode) {
mpp_enc_ref_cfg_init(&ref);
if (p->gop_mode < 4)
mpi_enc_gen_ref_cfg(ref, gop_mode);
else
mpi_enc_gen_smart_gop_ref_cfg(ref, p->gop_len, p->vi_len);
mpp_enc_cfg_set_ptr(cfg, "rc:ref_cfg", ref);
}
ret = mpi->control(ctx, MPP_ENC_SET_CFG, cfg);
if (ret) {
mpp_err("mpi control enc set cfg failed ret %d\n", ret);
goto RET;
}
if (ref)
mpp_enc_ref_cfg_deinit(&ref);
/* optional */
{
RK_U32 sei_mode;
mpp_env_get_u32("sei_mode", &sei_mode, MPP_ENC_SEI_MODE_ONE_FRAME);
p->sei_mode = (MppEncSeiMode)sei_mode;
ret = mpi->control(ctx, MPP_ENC_SET_SEI_CFG, &p->sei_mode);
if (ret) {
mpp_err("mpi control enc set sei cfg failed ret %d\n", ret);
goto RET;
}
}
if (p->type == MPP_VIDEO_CodingAVC || p->type == MPP_VIDEO_CodingHEVC) {
p->header_mode = MPP_ENC_HEADER_MODE_EACH_IDR;
ret = mpi->control(ctx, MPP_ENC_SET_HEADER_MODE, &p->header_mode);
if (ret) {
mpp_err("mpi control enc set header mode failed ret %d\n", ret);
goto RET;
}
}
/* setup test mode by env */
mpp_env_get_u32("osd_enable", &p->osd_enable, 0);
mpp_env_get_u32("osd_mode", &p->osd_mode, MPP_ENC_OSD_PLT_TYPE_DEFAULT);
mpp_env_get_u32("roi_enable", &p->roi_enable, 0);
mpp_env_get_u32("user_data_enable", &p->user_data_enable, 0);
if (p->roi_enable) {
mpp_enc_roi_init(&p->roi_ctx, p->width, p->height, p->type, 4);
mpp_assert(p->roi_ctx);
}
RET:
return ret;
}
MPP_RET test_ctx_deinit(MpiEncTestData *p)
{
if (p) {
// if (p->cam_ctx) {
// camera_source_deinit(p->cam_ctx);
// p->cam_ctx = NULL;
// }
if (p->fp_input) {
fclose(p->fp_input);
p->fp_input = NULL;
}
if (p->fp_output) {
fclose(p->fp_output);
p->fp_output = NULL;
}
if (p->fp_verify) {
fclose(p->fp_verify);
p->fp_verify = NULL;
}
}
return MPP_OK;
}
// using namespace std;
// int isOpen = false;
// const char * fifo_name = "/home/orangepi/code/live/testProgs/pipe.264";
// int pipe_fd;
// int mpp_packet_write_to_fifo(void *ptr, size_t len)
// {
// int ret = 0;
// if(!isOpen)
// {
// qDebug()<<"start open the fifo file\n";
// pipe_fd = open(fifo_name, O_WRONLY); // blocks until the read end is opened
// if(pipe_fd != -1)
// {
// qDebug()<<("open fifo success\n");
// // qDebug()<<("thread mpp_packet_write_to_fifo, pipe_fd = %d\n", pipe_fd);
// isOpen = true;
// //return pipe_fd;
// }
// else
// {
// qDebug()<<("pipe file open error %s\n", strerror(errno));
// return -1;
// // pthread_exit(NULL);
// }
// }
// ret = write(pipe_fd, ptr, len);
// if(ret != len)
// {
// printf("=======Write fifo Err======\n");
// return -1;
// }
// return 0;
// }
MPP_RET test_mpp_run(MpiEncMultiCtxInfo *info,cv::Mat pic,char* &fs,int & length)
{
// MpiEncTestArgs *cmd = info->cmd;
MpiEncTestData *p = &info->ctx;
MppApi *mpi = p->mpi;
MppCtx ctx = p->ctx;
RK_U32 quiet = 0;
RK_S32 chn = info->chn;
RK_U32 cap_num = 0;
DataCrc checkcrc;
MPP_RET ret = MPP_OK;
p->frame_count = 0;
// initialize the CRC check data structure
memset(&checkcrc, 0, sizeof(checkcrc));
checkcrc.sum = mpp_malloc(RK_ULONG, 512);
// one complete pass: frame init/deinit, plus filling the input and fetching the output
while (!p->pkt_eos) {
MppMeta meta = NULL;
MppFrame frame = NULL;
MppPacket packet = NULL;
void *buf = mpp_buffer_get_ptr(p->frm_buf);
RK_S32 cam_frm_idx = -1;
MppBuffer cam_buf = NULL;
RK_U32 eoi = 1;
// actual data input: copy the incoming cv::Mat into the MPP frame buffer
int width = pic.cols;  // cv::Mat: cols = width, rows = height
int height = pic.rows;
int totalBytes = width * height * 3; // 3 channels (BGR), one byte per channel
memcpy(buf, pic.data, totalBytes);
// if (p->fp_input) {
// test image 2 (hard-coded color bars)
// int width = 1920;
// int height = 1080;
// // create a blank, all-black image
// cv::Mat colorBar = cv::Mat::zeros(height, width, CV_8UC3);
// // set the width of each color bar
// int barWidth = width / 8; // 8 bars, adjust as needed
// // generate the color bars
// for (int i = 0; i < 8; ++i) {
// // compute the start and end position of this bar
// int startX = i * barWidth;
// int endX = (i + 1) * barWidth;
// // set the bar color (BGR)
// cv::Vec3b color;
// if (i % 2 == 0) {
// color = cv::Vec3b(255, 0, 0); // blue
// } else {
// color = cv::Vec3b(0, 255, 0); // green
// }
// // draw the bar onto the image
// colorBar(cv::Rect(startX, 0, barWidth, height)) = color;
// }
// int totalBytes = width * height * 3; // 3 channels (BGR), one byte per channel
// memcpy(buf, colorBar.data, totalBytes);
// test image read from a file
// ret = read_image((RK_U8*)buf, p->fp_input, p->width, p->height,
// p->hor_stride, p->ver_stride, p->fmt);
// buf holds the image data; fp_input would be a file pointer (left for another day)
if (ret == MPP_NOK ) {
p->frm_eos = 1;
// check whether the end condition is met; if so, send one more frame flagged as EOS
if (p->frame_num < 0 || p->frame_count < p->frame_num) {
clearerr(p->fp_input);
rewind(p->fp_input);
p->frm_eos = 0;
mpp_log_q(quiet, "chn %d loop times %d\n", chn, ++p->loop_times);
continue;
}
mpp_log_q(quiet, "chn %d found last frame. feof %d\n", chn, feof(p->fp_input));
} else if (ret == MPP_ERR_VALUE)
goto RET;
// }
ret = mpp_frame_init(&frame);
if (ret) {
mpp_err_f("mpp_frame_init failed\n");
goto RET;
}
mpp_frame_set_width(frame, p->width);
mpp_frame_set_height(frame, p->height);
mpp_frame_set_hor_stride(frame, p->hor_stride);
mpp_frame_set_ver_stride(frame, p->ver_stride);
mpp_frame_set_fmt(frame, p->fmt);
mpp_frame_set_eos(frame, p->frm_eos);
// attach my pre-allocated frm_buf buffer to the frame
mpp_frame_set_buffer(frame, p->frm_buf);
meta = mpp_frame_get_meta(frame);
mpp_packet_init_with_buffer(&packet, p->pkt_buf);
/* NOTE: It is important to clear output packet length!! */
mpp_packet_set_length(packet, 0);
mpp_meta_set_packet(meta, KEY_OUTPUT_PACKET, packet);
mpp_meta_set_buffer(meta, KEY_MOTION_INFO, p->md_info);
// if (p->osd_enable || p->user_data_enable || p->roi_enable) {
// if (p->user_data_enable) {
// MppEncUserData user_data;
// char *str = "this is user data\n";
// if ((p->frame_count & 10) == 0) {
// user_data.pdata = str;
// user_data.len = strlen(str) + 1;
// mpp_meta_set_ptr(meta, KEY_USER_DATA, &user_data);
// }
// static RK_U8 uuid_debug_info[16] = {
// 0x57, 0x68, 0x97, 0x80, 0xe7, 0x0c, 0x4b, 0x65,
// 0xa9, 0x06, 0xae, 0x29, 0x94, 0x11, 0xcd, 0x9a
// };
// MppEncUserDataSet data_group;
// MppEncUserDataFull datas[2];
// char *str1 = "this is user data 1\n";
// char *str2 = "this is user data 2\n";
// data_group.count = 2;
// datas[0].len = strlen(str1) + 1;
// datas[0].pdata = str1;
// datas[0].uuid = uuid_debug_info;
// datas[1].len = strlen(str2) + 1;
// datas[1].pdata = str2;
// datas[1].uuid = uuid_debug_info;
// data_group.datas = datas;
// mpp_meta_set_ptr(meta, KEY_USER_DATAS, &data_group);
// }
// if (p->osd_enable) {
// /* gen and cfg osd plt */
// mpi_enc_gen_osd_plt(&p->osd_plt, p->frame_count);
// p->osd_plt_cfg.change = MPP_ENC_OSD_PLT_CFG_CHANGE_ALL;
// p->osd_plt_cfg.type = MPP_ENC_OSD_PLT_TYPE_USERDEF;
// p->osd_plt_cfg.plt = &p->osd_plt;
// ret = mpi->control(ctx, MPP_ENC_SET_OSD_PLT_CFG, &p->osd_plt_cfg);
// if (ret) {
// mpp_err("mpi control enc set osd plt failed ret %d\n", ret);
// goto RET;
// }
// /* gen and cfg osd plt */
// mpi_enc_gen_osd_data(&p->osd_data, p->buf_grp, p->width,
// p->height, p->frame_count);
// mpp_meta_set_ptr(meta, KEY_OSD_DATA, (void*)&p->osd_data);
// }
// if (p->roi_enable) {
// RoiRegionCfg *region = &p->roi_region;
// /* calculated in pixels */
// region->x = MPP_ALIGN(p->width / 8, 16);
// region->y = MPP_ALIGN(p->height / 8, 16);
// region->w = 128;
// region->h = 256;
// region->force_intra = 0;
// region->qp_mode = 1;
// region->qp_val = 24;
// mpp_enc_roi_add_region(p->roi_ctx, region);
// region->x = MPP_ALIGN(p->width / 2, 16);
// region->y = MPP_ALIGN(p->height / 4, 16);
// region->w = 256;
// region->h = 128;
// region->force_intra = 1;
// region->qp_mode = 1;
// region->qp_val = 10;
// mpp_enc_roi_add_region(p->roi_ctx, region);
// /* send roi info by metadata */
// mpp_enc_roi_setup_meta(p->roi_ctx, meta);
// }
// }
if (!p->first_frm)
p->first_frm = mpp_time();
/*
* NOTE: in non-block mode the frame can be resent.
* The default input timeout mode is block.
*
* User should release the input frame to meet the requirements of
* resource creator must be the resource destroyer.
*/
ret = mpi->encode_put_frame(ctx, frame);
if (ret) {
mpp_err("chn %d encode put frame failed\n", chn);
mpp_frame_deinit(&frame);
goto RET;
}
mpp_frame_deinit(&frame);
do {
ret = mpi->encode_get_packet(ctx, &packet);
if (ret) {
mpp_err("chn %d encode get packet failed\n", chn);
goto RET;
}
mpp_assert(packet);
if (packet) {
// write packet to file here
void *ptr = mpp_packet_get_pos(packet);
size_t len = mpp_packet_get_length(packet);
char log_buf[256];
RK_S32 log_size = sizeof(log_buf) - 1;
RK_S32 log_len = 0;
if (!p->first_pkt)
p->first_pkt = mpp_time();
p->pkt_eos = mpp_packet_get_eos(packet);
if (p->fp_output){
// fwrite(ptr, 1, len, p->fp_output);
fs = (char*)malloc(len*sizeof(char));
memcpy(fs, ptr, len);
length = len;
// fs = (char*)cpy;
//pipe close
// if(mpp_packet_write_to_fifo(cpy, len) < 0)// blocking
// {
// // goto RET;
// printf(" mpp_packet_write_to_fifo err \n");
// }
// timeval ref;
// gettimeofday(&ref, NULL);
// fs->postFrame((char*)cpy,len,ref);
}
if (p->fp_verify && !p->pkt_eos) {
calc_data_crc((RK_U8 *)ptr, (RK_U32)len, &checkcrc);
mpp_log("p->frame_count=%d, len=%d\n", p->frame_count, len);
write_data_crc(p->fp_verify, &checkcrc);
}
log_len += snprintf(log_buf + log_len, log_size - log_len,
"encoded frame %-4d", p->frame_count);
/* for low delay partition encoding */
if (mpp_packet_is_partition(packet)) {
eoi = mpp_packet_is_eoi(packet);
log_len += snprintf(log_buf + log_len, log_size - log_len,
" pkt %d", p->frm_pkt_cnt);
p->frm_pkt_cnt = (eoi) ? (0) : (p->frm_pkt_cnt + 1);
}
log_len += snprintf(log_buf + log_len, log_size - log_len,
" size %-7zu", len);
if (mpp_packet_has_meta(packet)) {
meta = mpp_packet_get_meta(packet);
RK_S32 temporal_id = 0;
RK_S32 lt_idx = -1;
RK_S32 avg_qp = -1;
if (MPP_OK == mpp_meta_get_s32(meta, KEY_TEMPORAL_ID, &temporal_id))
log_len += snprintf(log_buf + log_len, log_size - log_len,
" tid %d", temporal_id);
if (MPP_OK == mpp_meta_get_s32(meta, KEY_LONG_REF_IDX, <_idx))
log_len += snprintf(log_buf + log_len, log_size - log_len,
" lt %d", lt_idx);
if (MPP_OK == mpp_meta_get_s32(meta, KEY_ENC_AVERAGE_QP, &avg_qp))
log_len += snprintf(log_buf + log_len, log_size - log_len,
" qp %d", avg_qp);
}
// mpp_log_q(quiet, "chn %d %s\n", chn, log_buf);
mpp_packet_deinit(&packet);
// fps_calc_inc(p->fps_out_num);
p->stream_size += len;
p->frame_count += eoi;
if (p->pkt_eos) {
mpp_log_q(quiet, "chn %d found last packet\n", chn);
mpp_assert(p->frm_eos);
}
}
} while (!eoi);
if (p->frame_num > 0 && p->frame_count >= p->frame_num)
break;
if (p->frm_eos && p->pkt_eos)
break;
}
RET:
MPP_FREE(checkcrc.sum);
return ret;
}
encoder::encoder()
{
}
int encoder::init(char * &fs,int &size)
{
memset(this->info,0,sizeof(MpiEncMultiCtxInfo));
MpiEncTestData *p = &info->ctx;
MpiEncMultiCtxRet *enc_ret = &info->ret;
MppApi *mpi = p->mpi;
MppCtx ctx = p->ctx;
MppPollType timeout = MPP_POLL_BLOCK;
MPP_RET ret = MPP_OK;
RK_S64 t_s = 0;
RK_S64 t_e = 0;
RK_U32 quiet = 0;
ret = test_ctx_init(info);
if (ret) {
mpp_err_f("test data init failed ret %d\n", ret);
goto MPP_TEST_OUT;
}
ret = mpp_buffer_group_get_internal(&p->buf_grp, MPP_BUFFER_TYPE_DRM);
if (ret) {
mpp_err_f("failed to get mpp buffer group ret %d\n", ret);
goto MPP_TEST_OUT;
}
ret = mpp_buffer_get(p->buf_grp, &p->frm_buf, p->frame_size + p->header_size);
if (ret) {
mpp_err_f("failed to get buffer for input frame ret %d\n", ret);
goto MPP_TEST_OUT;
}
ret = mpp_buffer_get(p->buf_grp, &p->pkt_buf, p->frame_size);
if (ret) {
mpp_err_f("failed to get buffer for output packet ret %d\n", ret);
goto MPP_TEST_OUT;
}
ret = mpp_buffer_get(p->buf_grp, &p->md_info, p->mdinfo_size);
if (ret) {
mpp_err_f("failed to get buffer for motion info output packet ret %d\n", ret);
goto MPP_TEST_OUT;
}
// encoder demo
ret = mpp_create(&p->ctx, &p->mpi);
if (ret) {
mpp_err("mpp_create failed ret %d\n", ret);
goto MPP_TEST_OUT;
}
mpp_log_q(quiet, "%p encoder test start w %d h %d type %d\n",
p->ctx, p->width, p->height, p->type);
ret = p->mpi->control(p->ctx, MPP_SET_OUTPUT_TIMEOUT, &timeout);
if (MPP_OK != ret) {
mpp_err("mpi control set output timeout %d ret %d\n", timeout, ret);
goto MPP_TEST_OUT;
}
ret = mpp_init(p->ctx, MPP_CTX_ENC, p->type);
if (ret) {
mpp_err("mpp_init failed ret %d\n", ret);
goto MPP_TEST_OUT;
}
ret = mpp_enc_cfg_init(&p->cfg);
if (ret) {
mpp_err_f("mpp_enc_cfg_init failed ret %d\n", ret);
goto MPP_TEST_OUT;
}
ret = p->mpi->control(p->ctx, MPP_ENC_GET_CFG, p->cfg);
if (ret) {
mpp_err_f("get enc cfg failed ret %d\n", ret);
goto MPP_TEST_OUT;
}
ret = test_mpp_enc_cfg_setup(info);
if (ret) {
mpp_err_f("test mpp setup failed ret %d\n", ret);
goto MPP_TEST_OUT;
}
p->pkt_eos = 0;
// prepare the stream header (SPS/PPS)
if (p->type == MPP_VIDEO_CodingAVC || p->type == MPP_VIDEO_CodingHEVC) {
MppPacket packet = NULL;
/*
* Can use packet with normal malloc buffer as input not pkt_buf.
* Please refer to vpu_api_legacy.cpp for normal buffer case.
* Using pkt_buf buffer here is just for simplifying the demo.
*/
mpp_packet_init_with_buffer(&packet, p->pkt_buf);
/* NOTE: It is important to clear output packet length!! */
mpp_packet_set_length(packet, 0);
ret = p->mpi->control(p->ctx, MPP_ENC_GET_HDR_SYNC, packet);
if (ret) {
mpp_err("mpi control enc get extra info failed\n");
// goto RET;
} else {
/* get and write sps/pps for H.264 */
void *ptr = mpp_packet_get_pos(packet);
size_t len = mpp_packet_get_length(packet);
if (p->fp_output) {
// copy the SPS/PPS header out to the caller once fs has been allocated
fs = (char*)malloc(len*sizeof(char));
memcpy(fs, ptr, len);
size = len;
}
// fwrite(ptr, 1, len, p->fp_output);
}
mpp_packet_deinit(&packet);
return 1;
}
MPP_TEST_OUT:
if (p->ctx) {
mpp_destroy(p->ctx);
p->ctx = NULL;
}
if (p->cfg) {
mpp_enc_cfg_deinit(p->cfg);
p->cfg = NULL;
}
if (p->frm_buf) {
mpp_buffer_put(p->frm_buf);
p->frm_buf = NULL;
}
if (p->pkt_buf) {
mpp_buffer_put(p->pkt_buf);
p->pkt_buf = NULL;
}
if (p->md_info) {
mpp_buffer_put(p->md_info);
p->md_info = NULL;
}
if (p->osd_data.buf) {
mpp_buffer_put(p->osd_data.buf);
p->osd_data.buf = NULL;
}
if (p->buf_grp) {
mpp_buffer_group_put(p->buf_grp);
p->buf_grp = NULL;
}
if (p->roi_ctx) {
mpp_enc_roi_deinit(p->roi_ctx);
p->roi_ctx = NULL;
}
test_ctx_deinit(p);
return 0;
}
MPP_RET encoder::postAframe(cv::Mat pic, char* &fs, int &length){
MPP_RET ret = MPP_NOK;
ret = test_mpp_run(info, pic, fs, length);
return ret;
}
int encoder::deinit(MPP_RET ret = MPP_OK){
MpiEncTestData *p = &info->ctx;
if (ret) {
mpp_err_f("test mpp run failed ret %d\n", ret);
goto POST_OUT;
}
ret = p->mpi->reset(p->ctx);
if (ret) {
mpp_err("mpi->reset failed\n");
goto POST_OUT;
}
POST_OUT:
// return ret;
if (p->ctx) {
mpp_destroy(p->ctx);
p->ctx = NULL;
}
if (p->cfg) {
mpp_enc_cfg_deinit(p->cfg);
p->cfg = NULL;
}
if (p->frm_buf) {
mpp_buffer_put(p->frm_buf);
p->frm_buf = NULL;
}
if (p->pkt_buf) {
mpp_buffer_put(p->pkt_buf);
p->pkt_buf = NULL;
}
if (p->md_info) {
mpp_buffer_put(p->md_info);
p->md_info = NULL;
}
if (p->osd_data.buf) {
mpp_buffer_put(p->osd_data.buf);
p->osd_data.buf = NULL;
}
if (p->buf_grp) {
mpp_buffer_group_put(p->buf_grp);
p->buf_grp = NULL;
}
if (p->roi_ctx) {
mpp_enc_roi_deinit(p->roi_ctx);
p->roi_ctx = NULL;
}
test_ctx_deinit(p);
return 0;
}
encoder::~encoder(){
}
The header file, encoder.h:
#ifndef ENCODER_H
#define ENCODER_H
#include <string.h>
#include "rk_mpi.h"
#include "mpp_env.h"
#include "mpp_mem.h"
#include "mpp_time.h"
#include "mpp_debug.h"
#include "mpp_common.h"
#include "utils.h"
#include "mpi_enc_utils.h"
//#include "camera_source.h"
#include "mpp_enc_roi_utils.h"
#include "mpp_rc_api.h"
#include <opencv2/opencv.hpp>
// #include "H264_V4l2DeviceSource.h"
typedef struct {
// base flow context
MppCtx ctx;
MppApi *mpi;
RK_S32 chn;
// global flow control flag
RK_U32 frm_eos;
RK_U32 pkt_eos;
RK_U32 frm_pkt_cnt;
RK_S32 frame_num;
RK_S32 frame_count;
RK_U64 stream_size;
/* end of encoding flag when set quit the loop */
volatile RK_U32 loop_end;
// src and dst
FILE *fp_input;
FILE *fp_output;
FILE *fp_verify;
/* encoder config set */
MppEncCfg cfg;
MppEncPrepCfg prep_cfg;
MppEncRcCfg rc_cfg;
MppEncCodecCfg codec_cfg;
MppEncSliceSplit split_cfg;
MppEncOSDPltCfg osd_plt_cfg;
MppEncOSDPlt osd_plt;
MppEncOSDData osd_data;
RoiRegionCfg roi_region;
MppEncROICfg roi_cfg;
// input / output
MppBufferGroup buf_grp;
MppBuffer frm_buf;
MppBuffer pkt_buf;
MppBuffer md_info;
MppEncSeiMode sei_mode;
MppEncHeaderMode header_mode;
// parameter for resource malloc
RK_U32 width;
RK_U32 height;
RK_U32 hor_stride;
RK_U32 ver_stride;
MppFrameFormat fmt;
MppCodingType type;
RK_S32 loop_times;
// CamSource *cam_ctx;
MppEncRoiCtx roi_ctx;
// resources
size_t header_size;
size_t frame_size;
size_t mdinfo_size;
/* NOTE: packet buffer may overflow */
size_t packet_size;
RK_U32 osd_enable;
RK_U32 osd_mode;
RK_U32 split_mode;
RK_U32 split_arg;
RK_U32 split_out;
RK_U32 user_data_enable;
RK_U32 roi_enable;
// rate control runtime parameter
RK_S32 fps_in_flex;
RK_S32 fps_in_den;
RK_S32 fps_in_num;
RK_S32 fps_out_flex;
RK_S32 fps_out_den;
RK_S32 fps_out_num;
RK_S32 bps;
RK_S32 bps_max;
RK_S32 bps_min;
RK_S32 rc_mode;
RK_S32 gop_mode;
RK_S32 gop_len;
RK_S32 vi_len;
RK_S32 scene_mode;
RK_S64 first_frm;
RK_S64 first_pkt;
} MpiEncTestData;
/* For each instance thread return value */
typedef struct {
float frame_rate;
RK_U64 bit_rate;
RK_S64 elapsed_time;
RK_S32 frame_count;
RK_S64 stream_size;
RK_S64 delay;
} MpiEncMultiCtxRet;
typedef struct {
MpiEncTestArgs *cmd; // pointer to global command line info
const char *name;
RK_S32 chn;
pthread_t thd; // thread for each instance
MpiEncTestData ctx; // context of encoder
MpiEncMultiCtxRet ret; // return of encoder
} MpiEncMultiCtxInfo;
class encoder
{
public:
encoder();
~encoder();
int init(char * &fs,int &size);
int deinit(MPP_RET ret );
MPP_RET postAframe(cv::Mat pic,char* &fs,int & length );
// void run();
// void defaultInit();
private:
MpiEncMultiCtxInfo *info = new MpiEncMultiCtxInfo ;
};
#endif // ENCODER_H
main.cpp
#include "encoder.h"
#include "decoder.h"
#include "iostream"
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <fstream>
#include <sstream>
void writeCharPointerToFile(const char* data, std::size_t size, const std::string& filename) {
// open the file
std::ofstream file(filename, std::ios::binary);
// check whether the file opened successfully
if (!file.is_open()) {
std::cerr << "failed to open file: " << filename << std::endl;
return;
}
// write the data
file.write(data, size);
// close the file
file.close();
}
using namespace std;
using namespace cv;
int main(){
std::string filename = "output.txt";
int width = 1280;
int height = 720;
// create a blank, all-black image
cv::Mat colorBar = cv::Mat::zeros(height, width, CV_8UC3);
// set the width of each color bar
int barWidth = width / 8; // 8 bars, adjust as needed
// generate the color bars
for (int i = 0; i < 8; ++i) {
// compute the start and end position of this bar
int startX = i * barWidth;
int endX = (i + 1) * barWidth;
// set the bar color (BGR)
cv::Vec3b color;
if (i % 2 == 0) {
color = cv::Vec3b(255, 0, 0); // blue
} else {
color = cv::Vec3b(0, 255, 0); // green
}
// draw the bar onto the image
colorBar(cv::Rect(startX, 0, barWidth, height)) = color;
}
int totalBytes = width * height * 3; // 3 channels (BGR), one byte per channel
// memcpy(buf, colorBar.data, totalBytes);
imwrite("1.jpg",colorBar);
encoder e;
decoder d;
char* frame;
int len;
e.init(frame,len);
d.init();
for(int i =0 ;i<1000;i++)
{
e.postAframe(colorBar,frame,len);
usleep(30*1000);
writeCharPointerToFile(frame, len, filename);
d.poststream(frame,len);
}
e.deinit(MPP_OK);
return 0;
}
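A note on the test program above: OpenCV Mats are in BGR order, while test_ctx_init configures the encoder input as MPP_FMT_RGB888, so red and blue may come out swapped in the decoded picture. If that happens, either switch the format to MPP_FMT_BGR888 in test_ctx_init, or convert the Mat before posting it — a minimal sketch, assuming you keep RGB888:
// assumption: the encoder stays configured as MPP_FMT_RGB888, so flip OpenCV's BGR byte order first
cv::Mat rgb;
cv::cvtColor(colorBar, rgb, cv::COLOR_BGR2RGB);
e.postAframe(rgb, frame, len);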
Debugging tips
One is the -g trick mentioned earlier: build with debug symbols and step through to inspect the configuration. Another, carried over from the previous post: if you are pushing a live stream, set the encoder GOP a bit smaller.
Then check whether your output actually looks right. I recommend the HxD hex editor for inspecting the binary. You can point the demo's file input at a JPEG, or save a BGR color-bar image with OpenCV, encode it into an .h264 file in image-encoding mode, and save the result frame by frame; compare it against the demo's own output. If they match byte for byte you are good; if not, start hunting for the cause.
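If you would rather not eyeball the hex by hand, a small helper like the one below (my own addition, not part of the demo) walks a dumped Annex-B .h264 file, finds the start codes and prints each NAL unit type. Seeing type 7 (SPS), 8 (PPS) and 5 (IDR slice) show up is a quick sign that the encoder is producing a sane stream:
// nal_scan.cpp - scan an Annex-B H.264 file for 00 00 00 01 / 00 00 01 start codes
// and print the NAL unit type of each NAL (nal_unit_type = first header byte & 0x1F)
#include <cstdio>
#include <vector>
int main(int argc, char **argv) {
    const char *path = (argc > 1) ? argv[1] : "./1.h264";
    FILE *fp = fopen(path, "rb");
    if (!fp) { printf("cannot open %s\n", path); return -1; }
    std::vector<unsigned char> buf;
    unsigned char tmp[4096];
    size_t n;
    while ((n = fread(tmp, 1, sizeof(tmp), fp)) > 0)
        buf.insert(buf.end(), tmp, tmp + n);
    fclose(fp);
    for (size_t i = 0; i + 3 < buf.size(); i++) {
        // accept both 4-byte and 3-byte start codes
        bool sc4 = (i + 4 < buf.size() && buf[i] == 0 && buf[i+1] == 0 && buf[i+2] == 0 && buf[i+3] == 1);
        bool sc3 = (buf[i] == 0 && buf[i+1] == 0 && buf[i+2] == 1);
        if (sc4 || sc3) {
            size_t hdr = i + (sc4 ? 4 : 3);
            printf("offset %zu: nal_unit_type %d\n", hdr, buf[hdr] & 0x1F);
            i = hdr; // continue scanning after the NAL header byte
        }
    }
    return 0;
}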