
Kafka Quick Start Series (10) | Kafka Consumer API Operations
2021-09-02 21:13:30

This post covers operations with Kafka's Consumer API.


Reliability when consuming data is easy to guarantee: data is persisted in Kafka, so there is no concern about losing it.
However, a consumer may fail mid-consumption (a power loss, a crash), and after recovering it must resume from where it left off. The consumer therefore needs to record, in real time, the offset up to which it has consumed.
Maintaining the offset is thus a question every consumer must address.

1. Manually committing offsets
• 1. Import dependencies
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>8</source>
                    <target>8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.11.0.2</version>
        </dependency>
    </dependencies>
              
• 2. Write the code

Classes we will need:
KafkaConsumer: the consumer object used to consume data
ConsumerConfig: provides the names of the required configuration parameters
ConsumerRecord: each record is delivered wrapped in a ConsumerRecord object

    package com.buwenbuhuo.kafka.consumer;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.util.Arrays;
    import java.util.Properties;

    /**
     * @author 卜溫不火
     * @create 2020-05-06 23:22
     * com.buwenbuhuo.kafka.consumer - the name of the target package where the new class or interface will be created.
     * kafka0506 - the name of the current project.
     */
    public class CustomConsumer {

        public static void main(String[] args) {

            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop002:9092");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            props.put(ConsumerConfig.GROUP_ID_CONFIG, "bigData-0507");
            // NOTE: this first, baseline example still uses auto-commit; the
            // manual-commit variants below (items 5 and 6) set this to false
            // and commit explicitly.
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
            props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 3000);

            // 1. Create a consumer
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("first"));

            // 2. Poll for records (the argument is the maximum blocking time in ms)
            try {
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(100);
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println("record = " + record);
                    }
                }
            } finally {
                consumer.close();
            }
        }
    }
              
              
• 3. Results

[Figure: console output of the consumer printing each record]

• 4. Code analysis

There are two ways to commit offsets manually: commitSync (synchronous commit) and commitAsync (asynchronous commit). Both commit the highest offset of the batch returned by the current poll. They differ in that commitSync blocks and retries until the commit succeeds (it can still fail for unrecoverable reasons), whereas commitAsync has no retry mechanism, so a commit may be lost.
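Because commitAsync gives up after a single attempt, it is worth at least making failures visible. In the asynchronous example below (item 5), the bare commitAsync() call can take a callback; a minimal sketch (the error message is illustrative):

    // Asynchronous commit with a callback so a failed commit is at least logged.
    // commitAsync does not retry; a later successful commit supersedes this one.
    consumer.commitAsync((offsets, exception) -> {
        if (exception != null) {
            System.err.println("Offset commit failed for " + offsets + ": " + exception);
        }
    });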

• 5. Asynchronous commit code
    package com.buwenbuhuo.kafka.consumer;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.util.Arrays;
    import java.util.Properties;

    /**
     * @author 卜溫不火
     * @create 2020-05-06 23:22
     * com.buwenbuhuo.kafka.consumer - the name of the target package where the new class or interface will be created.
     * kafka0506 - the name of the current project.
     */
    public class CustomConsumer {

        public static void main(String[] args) {

            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop002:9092");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            props.put(ConsumerConfig.GROUP_ID_CONFIG, "bigData-0507");
            // Disable auto-commit: offsets are committed manually below
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

            // 1. Create a consumer
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("second"));

            // 2. Poll for records
            try {
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(100);
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println("record = " + record);
                    }
                    // Asynchronous commit: returns immediately, no retry on failure
                    consumer.commitAsync();
                }
            } finally {
                consumer.close();
            }
        }
    }
              
• 6. Synchronous commit code
    package com.buwenbuhuo.kafka.consumer;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.util.Arrays;
    import java.util.Properties;

    /**
     * @author 卜溫不火
     * @create 2020-05-06 23:22
     * com.buwenbuhuo.kafka.consumer - the name of the target package where the new class or interface will be created.
     * kafka0506 - the name of the current project.
     */
    public class CustomConsumer {

        public static void main(String[] args) {

            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop002:9092");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            props.put(ConsumerConfig.GROUP_ID_CONFIG, "bigData-0507");
            // Disable auto-commit: offsets are committed manually below
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

            // 1. Create a consumer
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("second"));

            // 2. Poll for records
            try {
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(100);
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println("record = " + record);
                    }
                    // Synchronous commit: blocks and retries until the commit succeeds
                    consumer.commitSync();
                }
            } finally {
                consumer.close();
            }
        }
    }
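In practice the two approaches are often combined: commitAsync keeps the poll loop fast, and one final commitSync on shutdown makes sure the last batch's offsets are not lost. A sketch of the loop, assuming the same consumer setup as above:

    try {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("record = " + record);
            }
            // Fast, non-blocking commit on every iteration
            consumer.commitAsync();
        }
    } finally {
        try {
            // One last blocking commit so the final offsets survive shutdown
            consumer.commitSync();
        } finally {
            consumer.close();
        }
    }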
              
• 7. Result

[Figure: console output of the manual-commit run]

2. Automatically committing offsets

To let us focus on our own business logic, Kafka provides automatic offset committing.
The relevant parameters are:
enable.auto.commit: whether to enable automatic offset committing
auto.commit.interval.ms: the interval between automatic offset commits

• 1. Code
    package com.buwenbuhuo.kafka.consumer;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.util.Arrays;
    import java.util.Properties;

    /**
     * @author 卜溫不火
     * @create 2020-05-06 23:22
     * com.buwenbuhuo.kafka.consumer - the name of the target package where the new class or interface will be created.
     * kafka0506 - the name of the current project.
     */
    public class CustomConsumer {

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "hadoop002:9092");
            props.put("group.id", "test");
            // Enable auto-commit: offsets are committed every auto.commit.interval.ms
            props.put("enable.auto.commit", "true");
            props.put("auto.commit.interval.ms", "1000");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("second"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
        }
    }
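Note the trade-off: because offsets are committed on a timer rather than after each record is processed, a consumer that crashes between processing a batch and the next auto-commit will re-read those records on restart, so downstream processing should tolerate duplicates. When that window matters, use the manual commits from section 1.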
              
              
• 2. Run results
[Figure: console output showing offset, key, and value for each record]
3. Maintaining offsets yourself
• 1. Code
    package com.buwenbuhuo.kafka.consumer;

    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    import java.util.Arrays;
    import java.util.Collection;
    import java.util.Properties;

    /**
     * @author 卜溫不火
     * @create 2020-05-07 15:16
     * com.buwenbuhuo.kafka.consumer - the name of the target package where the new class or interface will be created.
     * kafka0506 - the name of the current project.
     */
    public class CustomOffsetConsumer {

        public static void main(String[] args) {

            Properties props = new Properties();
            props.put("bootstrap.servers", "hadoop002:9092");
            props.put("group.id", "test"); // consumers with the same group.id belong to the same consumer group
            props.put("enable.auto.commit", "false"); // disable auto-commit; offsets are maintained externally
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("second"), new ConsumerRebalanceListener() {

                // Called before a rebalance: the offsets of the partitions this
                // consumer currently owns can be committed here
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    System.out.println("========== revoked partitions ===========");
                    for (TopicPartition partition : partitions) {
                        System.out.println("partition = " + partition); // doing nothing here is also acceptable
                    }
                }

                // Called after a rebalance: seek each newly assigned partition to its stored offset
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    System.out.println("======== newly assigned partitions ===========");
                    for (TopicPartition partition : partitions) {
                        Long offset = getPartitionOffset(partition);
                        if (offset != null) { // guard: the stub below returns null until implemented
                            consumer.seek(partition, offset);
                        }
                    }
                }
            });

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
                    TopicPartition topicPartition = new TopicPartition(record.topic(), record.partition());
                    commitOffset(topicPartition, record.offset() + 1);
                }
            }
        }

        // Stub: persist the next offset to consume for this partition in external storage
        private static void commitOffset(TopicPartition topicPartition, long offset) {

        }

        // Stub: look up the stored offset for this partition in external storage
        private static Long getPartitionOffset(TopicPartition partition) {
            return null;
        }

    }
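The two stubs at the bottom are where a real implementation would read and write offsets in durable external storage, for example a database table keyed by group, topic, and partition, updated in the same transaction as the record processing. As a purely illustrative sketch, an in-memory map can stand in for that store; a HashMap does not survive a restart, so this only demonstrates the control flow (it also requires importing java.util.HashMap and java.util.Map):

    // Hypothetical stand-in for a durable offset store
    private static final Map<TopicPartition, Long> OFFSET_STORE = new HashMap<>();

    // Persist the next offset to consume for this partition
    private static void commitOffset(TopicPartition topicPartition, long offset) {
        OFFSET_STORE.put(topicPartition, offset);
    }

    // Look up the stored offset; start from 0 if this partition has no entry yet
    private static Long getPartitionOffset(TopicPartition partition) {
        return OFFSET_STORE.getOrDefault(partition, 0L);
    }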
              
• 2. Result
[Figure: console output of the run, including the rebalance listener's partition messages]

That's all for this post.



If this post helped you, give it a like and make it a habit! Writing is not easy; your support is what keeps me going. Don't forget to follow me after liking!

Excerpted from: https://blog.51cto.com/u
