Flink Operator Usage Guide: Global Configuration
2023-12-13 08:42:17
Background
The previous chapter covered the basic Flink Operator installation. In a real data-platform project, however, users usually want to see the Flink Operator's runtime logs. This can of course be done by mounting a volume into the Flink Operator pod and writing the logs to a specified directory on the host machine, but that approach is not very friendly for a data-platform product. Instead, we want to ship the operator service's logs to a Kafka appender, which means modifying the values configuration file of the Flink Operator Helm chart to reach that goal.
By default the Flink Operator does not support logging to a Kafka appender. To enable this capability, the kafka-clients dependency has to be added when the flink-kubernetes-operator image is built. Here I added the following to the pom.xml of the flink-kubernetes-operator module:
<!-- Kafka version -->
<properties>
    <kafka-clients.version>2.2.0</kafka-clients.version>
</properties>
...
<!-- kafka-clients dependency -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>${kafka-clients.version}</version>
</dependency>
Then repackage and rebuild flink-operator and publish a new flink-kubernetes-operator image.
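As a rough sketch of that rebuild step (the registry name and tag below are placeholders of my own, not values from the article; check the project's README for the exact build procedure):

# Build the operator image from the repository root, which ships a Dockerfile
docker build -t <your-registry>/flink-kubernetes-operator:custom .
# Push the image so the Helm chart can pull it
docker push <your-registry>/flink-kubernetes-operator:custom

Afterwards, point the chart's image.repository and image.tag values at the image you just pushed.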
Log Collection
You can think of the flink-operator service as a client for submitting Flink jobs. By design, it lets users override the log4j-console.properties and log4j-operator.properties configuration files when jobs are submitted. If you want to collect the runtime logs of the Kubernetes job containers, configure log4j-console.properties; if you want to see the logs of the flink-operator service itself, override log4j-operator.properties.
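These overrides live in the defaultConfiguration section of the chart's values.yaml. As a hedged sketch (the topic name, broker address, and log pattern below are placeholders of my own, not values from the article), a Log4j 2 Kafka appender for the operator's own logs could look roughly like this:

defaultConfiguration:
  create: true
  # Append these properties to the defaults shipped with the operator
  append: true
  log4j-operator.properties: |+
    # Ship operator logs to Kafka (broker address and topic are placeholders)
    appender.kafka.type = Kafka
    appender.kafka.name = KafkaAppender
    appender.kafka.topic = flink-operator-logs
    appender.kafka.layout.type = PatternLayout
    appender.kafka.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    appender.kafka.property.type = Property
    appender.kafka.property.name = bootstrap.servers
    appender.kafka.property.value = kafka-broker:9092
    # Attach the Kafka appender to the root logger
    rootLogger.appenderRef.kafka.ref = KafkaAppender

For reference, the relevant part of the chart's values.yaml (starting with its license header) is reproduced below: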
################################################################################
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
---
# List of kubernetes namespaces to watch for FlinkDeployment changes, empty means all namespaces.
# When enabled RBAC is only created for said namespaces, otherwise it is done for the cluster scope.
# watchNamespaces: ["flink"]
image:
  repository: ghcr.io/apache/flink-kubernetes-operator
  pullPolicy: IfNotPresent
  tag: "51eeae1"
  # If image digest is set then it takes precedence and the image tag will be ignored
  # digest: ""
imagePullSecrets: []
# Replicas must be 1 unless operator leader election is configured
replicas: 1
# Strategy type must be Recreate unless leader election is configured
strategy:
  type: Recreate
rbac:
  create: true
  # kubernetes.rest-service.exposed.type: NodePort requires
  # list permission for nodes at the cluster scope.
  # Set create to true if you are using NodePort type.
  nodesRule:
    create: false
  operatorRole:
    create: true
    name: "flink-operator"
  operatorRoleBinding:
    create: true
    name: "flink-operator-role-binding"
  jobRole
Source: https://blog.csdn.net/weixin_38231448/article/details/134507176