There is an example in the Kafka Producer Exactly-once article.
Example - Chained Transaction
An example is here.
mTLS enables clients to authenticate servers and servers to reciprocally authenticate clients. Kafka supports other authentication mechanisms, such as OAuth and Salted Challenge Response Authentication Mechanism (SCRAM), but we chose mTLS because it can verify the peer's identity offline. This means a system does not need an active connection to an authentication server to ascertain the identity of a peer, which enables operation in disparate network environments where not all parties have access to such a central authority.
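As a minimal sketch, client-side mTLS for a Kafka client boils down to a handful of SSL properties. The keystore/truststore paths and passwords below are placeholder assumptions; only the property names are real Kafka configuration keys:

```properties
# Use TLS for both transport encryption and authentication
security.protocol=SSL
# Truststore with the CA certificate(s) used to verify the broker (path is a placeholder)
ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
ssl.truststore.password=changeit
# Keystore with the client's own certificate and key, presented to the broker for mTLS
ssl.keystore.location=/etc/kafka/secrets/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```

On the broker side, setting `ssl.client.auth=required` makes the broker reject clients that do not present a valid certificate, completing the mutual part of mTLS.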
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
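Since the notes above reference an exactly-once producer example, here is a hedged sketch of a transactional producer that uses the `ByteArraySerializer` setting shown above. The broker address, topic name, and `transactional.id` are assumptions for illustration:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExactlyOnceProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        // Setting a transactional.id enables idempotence and transactional (exactly-once) writes
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-tx-producer"); // assumption

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            try {
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("my-topic", "key", "value".getBytes()));
                producer.commitTransaction(); // all sends become visible atomically
            } catch (Exception e) {
                producer.abortTransaction(); // discard all sends made in this transaction
                throw e;
            }
        }
    }
}
```

Everything between `beginTransaction()` and `commitTransaction()` is committed or aborted as a unit, which is the building block the chained-transaction example relies on.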
docker pull confluentinc/cp-kafka-connect:latest

docker run \
  -it \
  --rm \
  --name es-sink-connector \
  -e CONNECT_BOOTSTRAP_SERVERS=kafka:9092 \
  -e CONNECT_REST_PORT=8083 \
  -e CONNECT_GROUP_ID="connect-cluster" \
  -e CONNECT_CONFIG_STORAGE_TOPIC="connect-configs" \
  -e CONNECT_OFFSET_STORAGE_TOPIC="connect-offsets" \
  -e CONNECT_STATUS_STORAGE_TOPIC="connect-status" \
  -e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
  -e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
  -e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
  -e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
  -e CONNECT_REST_ADVERTISED_HOST_NAME="localhost" \
  -e CONNECT_PLUGIN_PATH="/usr/share/java,/etc/kafka-connect/jars" \
  -p 8083:8083 \
  confluentinc/cp-kafka-connect:latest
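Once the Connect worker is up, a connector is registered through its REST API on port 8083. As a sketch, assuming the Elasticsearch sink plugin is on the plugin path, and with the topic name and Elasticsearch URL as placeholder assumptions:

```shell
# Register an Elasticsearch sink connector via the Connect REST API
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "es-sink",
    "config": {
      "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
      "tasks.max": "1",
      "topics": "my-topic",
      "connection.url": "http://elasticsearch:9200",
      "key.ignore": "true",
      "schema.ignore": "true"
    }
  }'
```

`GET http://localhost:8083/connectors/es-sink/status` then reports whether the connector and its tasks are running.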
JMX Exporter gives you the metrics of each individual broker, such as memory, GC, and the Kafka metrics exposed over JMX (kafkajmx.* in Wavefront), while
Kafka Exporter gives you metrics on the overall state of the cluster, such as partition offsets (kafka.* in Wavefront).
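As a hedged sketch of how the JMX Exporter side is wired up, its configuration is a small YAML file of MBean-to-metric rules; the pattern below is illustrative, not a recommended production rule set:

```yaml
# Minimal jmx_exporter config sketch
lowercaseOutputName: true
rules:
  # Maps broker MBeans such as kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec
  # to metric names like kafka_server_brokertopicmetrics_messagesinpersec_count
  - pattern: "kafka.server<type=(.+), name=(.+)><>Count"
    name: "kafka_server_$1_$2_count"
    type: COUNTER
```

The exporter runs as a Java agent inside each broker's JVM, which is why it sees per-broker internals that the cluster-level Kafka Exporter does not.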
The duality between KStream and KTable lies in the fact that you can convert between these two representations, allowing you to switch between stream-like and table-like processing as needed:

KStream to KTable: You can convert a KStream into a KTable using operations like groupByKey followed by an aggregation. This is useful when you want to create a table from a stream, aggregating data over a period of time.

KTable to KStream: Conversely, you can convert a KTable into a KStream. This allows you to turn a table-like representation into a stream for further processing or for joining with other streams.
<KR> KGroupedStream<KR,V> groupBy(KeyValueMapper<? super K,? super V,KR> keySelector)
<KR> KGroupedStream<KR,V> groupBy(KeyValueMapper<? super K,? super V,KR> keySelector, Grouped<KR,V> grouped)
KGroupedStream<K,V> groupByKey()
KGroupedStream<K,V> groupByKey(Grouped<K,V> grouped)
KTable<K,Long> count()
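The duality and the signatures above can be sketched as a small topology: a word count goes KStream → KTable via groupBy + count(), and back to a KStream via toStream(). The topic names are assumptions for illustration:

```java
import java.util.Arrays;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class DualitySketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // KStream -> KTable: re-key each word, then aggregate with count()
        KStream<String, String> lines = builder.stream("text-lines"); // topic name is an assumption
        KTable<String, Long> wordCounts = lines
            .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\s+")))
            .groupBy((key, word) -> word, Grouped.with(Serdes.String(), Serdes.String()))
            .count(); // KTable<K,Long> count(), as in the signature above

        // KTable -> KStream: emit the table's changelog as a stream of updates
        wordCounts.toStream().to("word-counts"); // topic name is an assumption

        // Print the wiring of the topology without starting it
        System.out.println(builder.build().describe());
    }
}
```

Note that groupBy here changes the key (from the record key to the word itself), which is why the Grouped serdes must be supplied, matching the second groupBy overload listed above.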
Introduction: Shows the consumers listening on a topic. There can be multiple consumer groups listening on the same topic. Each topic has different part...