@sebarys wrote:
Hello everyone,
We’ve tried to use the Alpakka Kafka library (https://doc.akka.io/docs/alpakka-kafka/current/home.html) to implement the following scenario:
- some application parts must send messages to a Kafka topic (domain events)
- each place that requests message publication to Kafka should be notified about publication success or failure and handle both cases properly - since publication is an I/O operation and we don’t want to block, ideally the caller would receive a CompletableFuture; the interface of such a request could be (see the caller-side sketch after this list):

  CompletableFuture<KafkaProducerResult> publishMessageToKafka(final DomainEvent domainEvent)
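To make the expected contract concrete, here is a minimal caller-side sketch of how such a future would be consumed. It only relies on java.util.concurrent.CompletableFuture; kafkaPublisher, OrderCreated, orderId and log are hypothetical names used for illustration, not part of the original design:

    // Hypothetical caller: hand off a domain event and react to either outcome without blocking.
    // kafkaPublisher, OrderCreated, orderId and log are illustrative placeholders.
    final CompletableFuture<KafkaProducerResult> publication =
        kafkaPublisher.publishMessageToKafka(new OrderCreated(orderId));

    publication.whenComplete((result, failure) -> {
        if (failure != null) {
            // Publication failed: log, retry, or mark the event as unpublished.
            log.error("Publishing domain event failed", failure);
        } else {
            // Publication succeeded: acknowledge or update bookkeeping.
            log.info("Domain event published: {}", result);
        }
    });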
Based on the existing components we designed the following stream:
    Source.<Tuple2<DomainEvent, CompletableFuture<KafkaProducerResult>>>queue(QUEUE_BUFFER_SIZE, OverflowStrategy.backpressure())
        .map(tuple -> mapToProducerMessage(topicName, tuple))
        .via(Producer.flexiFlow(kafkaProducerSettings))
        .map(result -> {
            final CompletableFuture<KafkaProducerResult> completableFuture = result.passThrough();
            return completableFuture.complete(new KafkaProducerResult(KafkaProducerResult.Result.PUBLISHED));
        })
        .to(Sink.ignore())
        .run(system);
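For completeness, here is a minimal sketch of how publishMessageToKafka could feed this stream, assuming the SourceQueueWithComplete materialized by the Source.queue above is kept in a field named queue; the field name and the handling of QueueOfferResult are assumptions, not part of the original code:

    // Sketch only: `queue` is assumed to be the materialized value of the stream above, e.g.
    // private final SourceQueueWithComplete<Tuple2<DomainEvent, CompletableFuture<KafkaProducerResult>>> queue = ...;
    public CompletableFuture<KafkaProducerResult> publishMessageToKafka(final DomainEvent domainEvent) {
        final CompletableFuture<KafkaProducerResult> result = new CompletableFuture<>();
        queue.offer(new Tuple2<>(domainEvent, result))
            .whenComplete((offerResult, failure) -> {
                // If the element never entered the stream, fail the caller's future right away.
                if (failure != null) {
                    result.completeExceptionally(failure);
                } else if (!QueueOfferResult.enqueued().equals(offerResult)) {
                    result.completeExceptionally(
                        new IllegalStateException("Domain event was not enqueued: " + offerResult));
                }
            });
        return result;
    }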
Such a solution works well in happy-path scenarios, but it does not let us complete the CompletableFuture in case of publication failures.
Is it possible to somehow receive from Producer.flexiFlow (or any other producer component available in the library) the result of the publication - including publication failures - together with the provided PassThrough element?
I think the presented use case is very common in production, where we want to handle both possible results of message publication. Thank you in advance for any help and suggestions!