Syncing MySQL data to Elasticsearch with Logstash (Part 3: the ES template problem)

2023-12-13 11:01:28
Starting Logstash failed with the following error:

[INFO ] 2023-12-11 09:57:44.471 [Converge PipelineAction::Create<pipeline1>] Reflections - Reflections took 62 ms to scan 1 urls, producing 131 keys and 463 values
[ERROR] 2023-12-11 09:57:45.399 [Converge PipelineAction::Create<pipeline1>] elasticsearch - Invalid setting for elasticsearch output plugin:

  output {
    elasticsearch {
      # This setting must be a path
      # File does not exist or cannot be opened /home/test/logstash/config/template.json
      template => "/home/test/logstash/config/template.json"
      ...
    }
  }
[ERROR] 2023-12-11 09:57:45.408 [Converge PipelineAction::Create<pipeline1>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:pipeline1, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:120)", "org.logstash.execution.AbstractPipelineExt.initialize(AbstractPipelineExt.java:186)", "org.logstash.execution.AbstractPipelineExt$INVOKER$i$initialize.call(AbstractPipelineExt$INVOKER$i$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:847)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1318)", "org.jruby.ir.instructions.InstanceSuperInstr.interpret(InstanceSuperInstr.java:139)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:367)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", …

It is still the same config file as in the previous posts:

input {
  jdbc {
    jdbc_driver_library => "/home/test/logstash/mysql-connector-j-8.0.32.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/_test"
    jdbc_user => "root"
    jdbc_password => "root"
    # Enable paging (defaults to false)
    jdbc_paging_enabled => "true"
    # Page size
    jdbc_page_size => "500"
    # Whether to record the result of the last run
    record_last_run => true
    # File where the last-run state is stored
    last_run_metadata_path => "/usr/share/logstash/pipeline/lastvalue.txt"
    # Whether to track the value of a database column
    use_column_value => true
    tracking_column => "id"
    # "numeric" or "timestamp"
    #tracking_column_type => "numeric"
    # If true, the state in last_run_metadata_path is cleared and the sync starts over
    clean_run => false
    # For complex queries, the SQL statement can live in a file instead, e.g.:
    # statement_filepath => "jdbc.sql"  (the path must point at the actual jdbc.sql file)
    # Polling schedule, in cron syntax. Examples from the official docs:
    # * 5 * 1-3 *  every minute of the 5 a.m. hour, every day, January through March
    # 0 * * * *    at minute 0 of every hour, every day
    # 0 6 * * * America/Chicago  at 6:00 a.m. Chicago time (UTC/GMT -5) every day
    # Fields are minute, hour, day-of-month, month, day-of-week;
    # all asterisks means run once every minute
    schedule => "* * * * *"
    # Index type
    #type => "jdbc"
    statement => "SELECT * FROM test ORDER BY id ASC"
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "test"
    user => "elastic"
    password => "elastic"
    timeout => 3000
    document_id => "%{id}"
    template => "/home/test/logstash/config/test.json"
    template_name => "test"
  }
}
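One thing worth noting in passing: the jdbc input above sets `use_column_value` and `tracking_column => "id"`, but the SQL statement never references `:sql_last_value`, so every scheduled run re-reads the whole table. If incremental sync is the goal, the statement can use the tracked value; a minimal sketch (file path and query are my own illustration, not from the original post):

```shell
# Hypothetical incremental query for the jdbc input above.
# :sql_last_value is the placeholder the jdbc input fills in from
# last_run_metadata_path when use_column_value/tracking_column are set.
cat > /tmp/incremental.sql <<'SQL'
SELECT * FROM test
WHERE id > :sql_last_value
ORDER BY id ASC
SQL

# Reference it from the pipeline with:
#   statement_filepath => "/tmp/incremental.sql"
grep -c ':sql_last_value' /tmp/incremental.sql   # → 1
```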

The ES template (the JSON file):

{
  "index_patterns": ["test*"],
  "mappings": {
    "properties": {
      "id": { "type": "integer" },
      "accession": { "type": "keyword" },
      "name": { "type": "keyword" },
      "comment_text": { "type": "text" },
      "sequence": { "type": "text" },
      "keyword": { "type": "keyword" }
    }
  }
}
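Before pointing Logstash at a template file, it is worth checking that the file is valid JSON, and it can also be installed by hand to rule out template problems. A sketch (assumes `python3` is available; `template_name => "test"` typically maps to the legacy `_template` API, though this depends on the ES and plugin versions):

```shell
# Write the template from the post to a file and sanity-check it.
cat > /tmp/test-template.json <<'JSON'
{
  "index_patterns": ["test*"],
  "mappings": {
    "properties": {
      "id": { "type": "integer" },
      "accession": { "type": "keyword" },
      "name": { "type": "keyword" },
      "comment_text": { "type": "text" },
      "sequence": { "type": "text" },
      "keyword": { "type": "keyword" }
    }
  }
}
JSON
python3 -m json.tool /tmp/test-template.json > /dev/null && echo "template JSON is valid"

# Optionally install it by hand (credentials from the pipeline above):
# curl -u elastic:elastic -H 'Content-Type: application/json' \
#      -XPUT 'http://localhost:9200/_template/test' \
#      --data-binary @/tmp/test-template.json
```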

This one really, really took me a long time. Here is how I worked through it:

1. I guessed it was a permissions problem, so I ran chmod 777 test.json. No luck.
2. Some posts I found suggested it was a slash problem; I tried both \ and / with no success.
3. Others suggested switching from a relative path to an absolute path; I tried that too, still nothing.
4. Then, right after sorting out the jar issue, I calmed down and it hit me:
        I'm running Logstash in Docker. The path I configured was a host path, which the container can't read.
        A calm mind thinks better.
        I switched to the container-side path, and it worked.
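The fix above amounts to making the host file visible inside the container and then using the container-side path in the pipeline. A minimal sketch (the exact directories and image tag are assumptions; `/usr/share/logstash` is the standard layout of the official image):

```shell
# Host directory that holds test.json (taken from the config above)
HOST_DIR=/home/test/logstash/config
# Where the official logstash image keeps its config
CONTAINER_DIR=/usr/share/logstash/config

# Mount the host directory into the container so the template path
# resolves inside the container, not on the host, e.g.:
#   docker run -d --name logstash \
#     -v "$HOST_DIR":"$CONTAINER_DIR" \
#     docker.elastic.co/logstash/logstash:<version>
#
# The pipeline should then reference the container-side path:
echo "template => \"$CONTAINER_DIR/test.json\""
```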

Source: https://blog.csdn.net/qq_35716085/article/details/134945940