Channel: Akka Libraries - Discussion Forum for Akka technologies

Deadlock in Graph with Partition/Merge Nested in Broadcast/Zip

Hello,

I am trying to build a graph modelling a case statement, with the added capability to combine the output of the merge with the input of the partition.
The graph basically looks like this:

+-----------+      +-----------+      +-------+      +-------+      +-----+
| Broadcast |--+-->| Partition |--+-->| Flow1 |--+-->| Merge |--+-->| Zip |
+-----------+  |   +-----------+  |   +-------+  |   +-------+  |   +-----+
               |                  |              |              |
               |                  |   +-------+  |              |
               |                  +-->| Flow2 |--+              |
               |                  |   +-------+  |              |
               |                  |              |              |
               |                        ...                     |
               |                  |              |              |
               |                  |   +-------+  |              |
               |                  +-->| FlowN |--+              |
               |                      +-------+                 |
               |                                                |
               +------------------------------------------------+

It works as I expect it to, as long as Flow1, Flow2, … do not create and merge substreams - in which case the composed flow deadlocks.

The graph is created with:

import akka.NotUsed
import akka.stream.FlowShape
import akka.stream.scaladsl.{Broadcast, Flow, GraphDSL, Merge, Partition, Zip}

def when1[A, B, C](partition: A => Int, actions: List[Flow[A, B, NotUsed]], map: (A, B) => C): Flow[A, C, NotUsed] =
  Flow.fromGraph(GraphDSL.create() { implicit builder =>
    import GraphDSL.Implicits._

    val b = builder.add(Broadcast[A](2))
    val p = builder.add(Partition[A](actions.size, partition))
    val m = builder.add(Merge[B](actions.size))
    actions.foreach(a => p ~> builder.add(a) ~> m)
    val agg = builder.add(Flow[(A, B)].map(x => map(x._1, x._2)))
    val z = builder.add(Zip[A, B])

    b ~> z.in0
    b ~> p
    m ~> z.in1
    z.out ~> agg

    FlowShape(b.in, agg.out)
  })

I want to treat the flows used to parametrize the branches as black boxes. What am I overlooking?

6 posts - 2 participants

How to look up Actor by path?

I am looking for a way to look up the local snapshot-store actor in order to delete snapshots, because in Akka Typed there is no longer an API to clean up ALL events when a persistent actor is no longer needed. See How to remove/clean-up state of EventSourcedBehavior when task done?

My current approach is to look up the snapshot-store actor and send a DeleteSnapshots message to it. This works, but the code looks really hacky and I am wondering if there is a better way to look up the actor. My code looks like this:

// build the path to the default local snapshot store under the /system guardian
ActorPath path = _context.getSelf().path().root().child("system").child("akka.persistence.snapshot-store.local");
ActorSelection actorSelection = _context.classicActorContext().actorSelection(path);
actorSelection.tell(new SnapshotProtocol.DeleteSnapshots(persistenceId().id(), SnapshotSelectionCriteria.latest()),
          Adapter.toClassic(_context.getSelf()));

Any suggestion?

1 post - 1 participant

Typed group route: how to add routees

I have tried this example, but I think I misunderstand both the example and the documentation. When I adapt the example I get:

Message [concurrency.StatsWorker$Available] to Actor[akka://WorkStealSystem/user/WorkerRouter#530679135] was dropped. No routees in group router for [ServiceKey[concurrency.StatsWorker$Command](StatsService)]. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

My question is: do I need to use Routers.pool() to add routees to the group router, or is there some other way to do this? I have the following:

    val groupRouter: GroupRouter[StatsWorker.Command] = Routers.group(App.StatsServiceKey)
    val serviceRouter: ActorRef[StatsWorker.Command] = ctx.spawn( groupRouter, "WorkerRouter")

    val worker = StatsWorker(serviceRouter)
    val service = ctx.spawn(worker, "WorkerService")

    // published through the receptionist to the other nodes in the cluster
    ctx.system.receptionist ! Receptionist.Register(StatsServiceKey, service)

And in StatsWorker I have:

  def apply(serviceRouter: ActorRef[StatsWorker.Command]): Behavior[Command] =
    Behaviors.setup { ctx =>
      ctx.log.info("Worker starting up")
      ctx.log.info("Sending available")
      serviceRouter ! Available(ctx.self)
      ctx.log.info("Waiting for job")
      waitJob(ctx)
  }

which results in the above log message.

My objective is to have a set of actors register themselves so that they can be contacted by another set of actors. More concretely, can I assume that if I use:

    val groupRouter: GroupRouter[StatsWorker.Command] = Routers.group(App.StatsServiceKey)
    val serviceRouter: ActorRef[StatsWorker.Command] = ctx.spawn( groupRouter, "WorkerRouter")

    val worker = AnotherWorker(serviceRouter)
    val service = ctx.spawn(worker, "WorkerService")

the AnotherWorker actor can send messages to the group?

My apologies if I am completely off the mark, but I am a beginner.
TIA
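
For comparison, here is a minimal sketch of the documented group-router pattern, based on my reading of the Akka Typed docs rather than on this thread (a parameterless StatsWorker() and the thread's App.StatsServiceKey are assumed):

import akka.actor.typed.receptionist.Receptionist
import akka.actor.typed.scaladsl.{Behaviors, Routers}
import akka.actor.typed.{ActorRef, Behavior}

// Sketch (assumption, not from this thread): the group router only routes to
// actors registered with the ServiceKey, so each routee must be registered.
// Registration is asynchronous, so messages sent before the listing reaches
// the router can still end up in dead letters.
val guardian: Behavior[Nothing] = Behaviors.setup[Nothing] { ctx =>
  val worker: ActorRef[StatsWorker.Command] = ctx.spawn(StatsWorker(), "worker-1")
  ctx.system.receptionist ! Receptionist.Register(App.StatsServiceKey, worker)

  val router: ActorRef[StatsWorker.Command] =
    ctx.spawn(Routers.group(App.StatsServiceKey), "WorkerRouter")
  // hand `router` to the actors that need to reach the workers
  Behaviors.empty
}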

1 post - 1 participant

What is the ActorSystem used for in Akka HTTP Http() constructor?

The Http() constructor to bind a route to an IP and port (in effect, create a server) takes in a typed ActorSystem as an implicit parameter. Can I get an idea of what this actor system is used for?

Specifically, in the context of my application, I have an actor system used to spawn actors and do things, so it has a specific Behavior. I am wondering if I can use it safely in the Http() constructor, or do I have to create a separate system for that? The docs use Behaviors.empty; is that required?
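
Not an authoritative answer, but for illustration, the usage from the docs can be sketched like this; any typed system works, and Behaviors.empty is just what the examples use when they have no guardian logic of their own:

import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

// Sketch based on the docs (not from this thread): Http() only borrows the
// ActorSystem's configuration, dispatchers and materializer; the guardian
// Behavior is not involved in serving requests.
implicit val system: ActorSystem[Nothing] = ActorSystem(Behaviors.empty, "my-app")

val route = path("ping")(complete("pong"))
Http().newServerAt("127.0.0.1", 8080).bind(route)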

1 post - 1 participant

Supervisor restart ends in an inactive actor: messages sent to dead letter

I am experimenting with a tiny cluster with 1-2 Producer seed-nodes:

      "akka://WorkStealSystem@127.0.0.1:25251",
      "akka://WorkStealSystem@127.0.0.1:25252"

I have another Consumer actor that subscribes to a Receptionist, selects one of these Producers and they exchange messages repeatedly. At one point I kill the only working seed (25251). This is detected by the Consumer and results in the following message:

17:50:54.939 [WorkStealSystem-work-dispatcher-16] INFO  concurrency.Utils$ - concurrency.Consumer$ @processJobs: Selecting producer.
17:50:54.942 [WorkStealSystem-work-dispatcher-16] ERROR concurrency.Utils$ - Supervisor RestartSupervisor saw failure: bound must be positive
java.lang.IllegalArgumentException: bound must be positive
	at java.util.Random.nextInt(Random.java:388) ~[?:?]
	at scala.util.Random.nextInt(Random.scala:96) ~[scala-library-2.13.3.jar:?]
	at concurrency.Consumer$.selectRandomly(Consumer.scala:66) ~[classes/:?]
	at concurrency.Consumer$.processJobs$$anonfun$1(Consumer.scala:102) ~[classes/:?]
	at akka.actor.typed.internal.BehaviorImpl$DeferredBehavior$$anon$1.apply(BehaviorImpl.scala:119) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.Behavior$.start(Behavior.scala:168) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.Behavior$.interpret(Behavior.scala:275) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.Behavior$.interpretMessage(Behavior.scala:230) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.InterceptorImpl$$anon$2.apply(InterceptorImpl.scala:57) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.RestartSupervisor.aroundReceive(Supervision.scala:261) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.InterceptorImpl.receive(InterceptorImpl.scala:85) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.Behavior$.interpret(Behavior.scala:274) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.Behavior$.interpretMessage(Behavior.scala:230) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.adapter.ActorAdapter.handleMessage(ActorAdapter.scala:129) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.adapter.ActorAdapter.$anonfun$adaptAndHandle$2(ActorAdapter.scala:178) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.adapter.ActorAdapter.$anonfun$adaptAndHandle$2$adapted(ActorAdapter.scala:178) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.adapter.ActorAdapter.withSafelyAdapted(ActorAdapter.scala:189) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.adapter.ActorAdapter.handle$1(ActorAdapter.scala:178) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.adapter.ActorAdapter.adaptAndHandle(ActorAdapter.scala:183) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.adapter.ActorAdapter.$anonfun$aroundReceive$2(ActorAdapter.scala:97) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.adapter.ActorAdapter.$anonfun$aroundReceive$2$adapted(ActorAdapter.scala:95) ~[akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.adapter.ActorAdapter.withSafelyAdapted(ActorAdapter.scala:189) [akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.typed.internal.adapter.ActorAdapter.aroundReceive(ActorAdapter.scala:95) [akka-actor-typed_2.13-2.6.9.jar:2.6.9]
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:577) [akka-actor_2.13-2.6.9.jar:2.6.9]
	at akka.actor.ActorCell.invoke(ActorCell.scala:547) [akka-actor_2.13-2.6.9.jar:2.6.9]
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) [akka-actor_2.13-2.6.9.jar:2.6.9]
	at akka.dispatch.Mailbox.run(Mailbox.scala:231) [akka-actor_2.13-2.6.9.jar:2.6.9]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]

The Consumer then restarts and after initialization I get the error:

17:50:54.956 [WorkStealSystem-akka.actor.default-dispatcher-6] INFO  akka.remote.artery.Association - Association to [akka://WorkStealSystem@127.0.0.1:25251] having UID [4584493602840708540] has been stopped. All messages to this UID will be delivered to dead letters. Reason: ActorSystem terminated

The actor then gets a time-out as expected and I get this output:

17:50:56.948 [WorkStealSystem-work-dispatcher-16] INFO  concurrency.Utils$ - concurrency.Consumer$: @Init Unexpected consumer message = |ResponseFailure(java.util.concurrent.TimeoutException: Ask timed out on [Actor[akka://WorkStealSystem@127.0.0.1:25251/user/producer#-83987191]] after [3000 ms]. Message of type [concurrency.Consumer$Available]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.)|
17:51:04.838 [WorkStealSystem-akka.actor.default-dispatcher-6] WARN  akka.remote.artery.Association - Outbound control stream to [akka://WorkStealSystem@127.0.0.1:25252] failed. Restarting it. akka.remote.artery.OutboundHandshake$HandshakeTimeoutException: Handshake with [akka://WorkStealSystem@127.0.0.1:25252] did not complete within 20000 ms

At this point the Consumer just sits there and does nothing. If I launch a new Consumer, it will find the Producers and work correctly.

So my question is: how can I revive the Consumer? Note that I am still using the same behaviour but get no new updates on the cluster members. I assume that this is because all messages are sent to the dead-letter mailbox.

TIA
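
As an aside, the immediate exception in the trace is Random.nextInt being called with bound 0, which suggests selectRandomly ran against an empty producer list. A hedged guard, my assumption rather than anything confirmed in the thread:

import scala.util.Random

// Guard sketch (assumption, not from this thread): return None when no
// producers are known yet, so the behaviour can wait for the next Receptionist
// listing instead of failing inside setup and tripping the restart supervisor.
def selectRandomly[A](producers: Seq[A]): Option[A] =
  if (producers.isEmpty) None
  else Some(producers(Random.nextInt(producers.size)))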

1 post - 1 participant

Instrument Source.queue

👋 everybody!

I’d like to gauge the saturation of the buffer maintained by the Source returned by Source.queue. Is there any known pattern (or out-of-the-box component) to achieve this?

I’m under the impression that this is not really possible and that the cleanest way to do this (without re-implementing the QueueSource graph stage) is to use Source.actorRef with bufferSize = 0 and maintain (and gauge) the queue inside it.
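
One approximation I can think of, without touching QueueSource (a sketch under the assumption that counting offered-but-not-yet-emitted elements is close enough to buffer saturation):

import java.util.concurrent.atomic.AtomicLong
import akka.actor.ActorSystem
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.Source

implicit val system: ActorSystem = ActorSystem("demo")

// `pending` approximates buffer fill: incremented on offer, decremented when
// the element leaves the buffer into the downstream. It slightly over-counts
// (by the element currently in flight), but is cheap and non-invasive.
val pending = new AtomicLong(0L)

val (queue, source) =
  Source
    .queue[Int](bufferSize = 100, OverflowStrategy.backpressure)
    .map { elem => pending.decrementAndGet(); elem }
    .preMaterialize()

def offerCounted(elem: Int) = {
  pending.incrementAndGet()
  queue.offer(elem) // if the result is Dropped or a failure, adjust the gauge
}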

3 posts - 1 participant

Akka HTTP 10.2.1 Released

Dear hakkers,

We are happy to announce the 10.2.1 release of Akka HTTP. This release is the first update in the 10.2.x series of Akka HTTP.

Changes since 10.2.0

For a full overview you can also see the 10.2.1 milestone. Notably, we have improved source compatibility with Akka HTTP 10.1.x #3489 and re-introduced the lingerTimeout option #3456.

akka-http-core

  • Reenable lingerTimeout #3456
  • Make entities of HEAD requests discard #3440
  • Fix HTTPS proxy CONNECT request #3434
  • Expose the SslSession via an attribute #3472
  • Add debug logging when Websocket impl closes connections after timeout #3431
  • Spelling in reference.conf #3425
  • Remove unused import in Handshake class #3464
  • Tests: use copy of SocketUtil that does not use 127.x.y.255 addresses #3460

akka-http

  • Provide implicit conversion from route and materializer to flow #3489
  • Fail storeUploadedFile(s) directive when IO operation fails #3437
  • Fix Route.handlerFlow deprecation message #3465
  • Fix a couple of warnings #3482
  • Clean up some imports #3457

docs

  • Docs for getting data from a strict entity in a client #3446
  • Fix links to httpsServer / httpsClient #3453
  • Fix typo in deprecation message #3424
  • Update Scala style guide example to match the Java one (remove duplicated “customer” path) #3448
  • Document how to disable hostname verification without ssl-config #3483
  • Fix introduction markup #3476
  • Update head documentation about default behavior #3480
  • Document requiring client authentication #3492
  • Update websocket docs to describe attribute #3488

akka-http2-support

  • Ignore unexpected DATA frames in state closed #3462
  • Improve HTTP2 debug logging #3467

build

  • Set up mima for 10.2 #3408
  • Simplify http-core test against akka master scenario #3402
  • Increase paradox parsing timeout #3430
  • Update to Scala 2.12.12 #3420
  • Update github-api from 1.115 to 1.116 #3443
  • Update sbt-jmh from 0.3.7 to 0.4.0 #3471
  • Update sbt-mima-plugin from 0.7.0 to 0.8.0 #3475
  • Update sbt-scalafix, scalafix-core, … from 0.9.20 to 0.9.21 #3479
  • Update scalatest from 3.1.3 to 3.1.4 #3454
  • Update silencer-lib, silencer-plugin from 1.7.0 to 1.7.1 #3397
  • Update specs2-core from 4.10.2 to 4.10.3 #3445

Credits

The complete list of closed issues can be found on the 10.2.1 milestone on GitHub.

For this release we had the help of 9 contributors – thank you all very much!

commits  added  removed
     14    490      117 Johannes Rudolph
     12    606      209 Arnout Engelen
      1     12       12 Damian Bronecki
      1      2        2 Paul-Guillaume Déjardin
      1      2        2 jczuchnowski
      1      2        2 Age Mooij
      1      1        1 Josep Prat
      1      1        1 KiranKumar BS
      1      1        1 Guillaume Massé

Akka by Lightbend

The Akka core team is employed by Lightbend. If you’re looking to take your Akka systems to the next level, let’s set up a time to discuss our enterprise-grade expert support, self-paced education courses, and technology enhancements that help you manage, monitor and secure your Akka systems - from development to production.

Happy hakking!

– The Akka Team

1 post - 1 participant

Thread safety of Source.queue

Is it safe to do something like this:

val (queue, source) =
  Source.queue[Int](bufferSize, OverflowStrategy.backpressure).preMaterialize()

Future {
  // in reality, the queue would be passed off somewhere else, where items would
  // be added to the queue from a different thread as events come in
  (1 to 10).foreach(queue.offer(_))
  queue.complete()
}

doSomethingWithSource(source)

The documentation isn’t particularly clear on that.

2 posts - 2 participants

Dynamically decide on whether to do Elasticsearch retries based on error messages

With akka-stream-alpakka-elasticsearch version 1.1.2 there was an option to implement RetryLogic, which allowed us to override the boolean shouldRetry(final int retries, final Seq<String> errors) method.

That let us decide dynamically, based on the error message, whether a retry should be made for a particular document. For example, if the error type is version_conflict_engine_exception we don't want to retry, while we do want to retry on other errors.

Can something similar be achieved with the 2.0.2 release? I was thinking about wrapping the flow with RetryFlow, for which we can define a decider function, but I'm not sure if that's recommended for our use case.

Any advice is highly appreciated!

This is what the flow looks like:

Flow<Record<T, C>, Record<T, C>, NotUsed> sinkFlow = Flow.create();

Flow<Record<T, C>, Record<T, C>, NotUsed> sink = sinkFlow
    .map(record -> {
        ElasticsearchDocument doc = recordToDocument.apply(record);

        WriteMessage<Object, Record<T, C>> message = recordToWriteMessage
            .apply(record, doc)
            .withPassThrough(record);

        if (doc.indexName != null) {
            message = message.withIndexName(doc.indexName);
        } else {
            message = message.withIndexName(configuration.defaultIndexName);
        }

        return message;
    })
    .via(ElasticsearchFlow.createWithPassThrough(
        configuration.defaultIndexName,
        configuration.indexType,
        sinkSettings,
        client,
        JsonUtil.JSON_MAPPER))
    .map(result -> {
        if (result.getError().isPresent()) {
            LOG.warn(
                "Error while publishing message to Elasticsearch: {}", result.getErrorReason());
        }

        return result.message().passThrough();
    });
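
Regarding the RetryFlow idea: I can't say whether it's recommended for this use case, but a rough Scala sketch of the shape would be as follows (the names, backoff values and the hypothetical isRetryable predicate, which would check for version_conflict_engine_exception, are mine):

import scala.concurrent.duration._
import akka.NotUsed
import akka.stream.scaladsl.{Flow, RetryFlow}

// Sketch (assumption, not a verified recommendation): decideRetry inspects the
// result and returns Some(element) to retry or None to give up, so version
// conflicts can be skipped while other errors are retried.
def withSelectiveRetry[In, Out](
    esFlow: Flow[In, Out, NotUsed],
    isRetryable: Out => Boolean): Flow[In, Out, NotUsed] =
  RetryFlow.withBackoff(
    minBackoff = 100.millis,
    maxBackoff = 5.seconds,
    randomFactor = 0.2,
    maxRetries = 3,
    flow = esFlow) { (in, out) =>
    if (isRetryable(out)) Some(in) else None
  }

As far as I recall, RetryFlow expects the wrapped flow to emit exactly one output element per input element, which is worth verifying against the Elasticsearch flow's behavior.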

1 post - 1 participant

Can we obtain the Cluster.Member from an ActorRef?

Given an ActorRef received from a message, can we determine the Cluster.Member?

TIA
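
One approach that comes to mind (an assumption on my part, not an answer from the thread): match the ref's address against the current membership, falling back to the self address for local refs.

import akka.actor.{ActorRef, Address}
import akka.cluster.{Cluster, Member}

// Sketch (assumption, not from this thread): a remote ActorRef's path carries
// the host/port of its node, which can be compared with the member addresses.
def memberFor(ref: ActorRef, cluster: Cluster): Option[Member] = {
  val address: Address =
    if (ref.path.address.hasGlobalScope) ref.path.address
    else cluster.selfAddress // local refs carry no host/port
  cluster.state.members.find(_.address == address)
}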

4 posts - 3 participants

Akka Typed version cluster sharding with distributed pubsub

How can we obtain an ActorContext with the Asynchronous testing API

(topic withdrawn by author, will be automatically deleted in 24 hours unless flagged)

1 post - 1 participant

How to migrate my event stream code to 2.6.9

Hi

I am in the process of migrating my Akka project from 2.5 to 2.6.9, but I can't find any mention of the eventStream in the 2.6.9 documentation. Can someone advise me on how I should be doing this, or direct me to the relevant documentation? I would imagine there are type-safety considerations that did not exist in version 2.5, for example.

Thanks
Des
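
For reference, in Akka 2.6 the typed event stream is addressed by sending messages to system.eventStream. A minimal sketch from my reading of the 2.6 docs (MyEvent is a made-up event type):

import akka.actor.typed.eventstream.EventStream
import akka.actor.typed.scaladsl.Behaviors

final case class MyEvent(text: String)

// Sketch of the 2.6 typed EventStream API (from the docs, not this thread):
// subscription and publication are now messages to system.eventStream, and the
// subscriber's ActorRef type provides the type safety.
val subscriber = Behaviors.setup[MyEvent] { ctx =>
  ctx.system.eventStream ! EventStream.Subscribe(ctx.self)
  Behaviors.receiveMessage { e =>
    ctx.log.info("got {}", e.text)
    Behaviors.same
  }
}
// elsewhere: system.eventStream ! EventStream.Publish(MyEvent("hello"))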

1 post - 1 participant

Failure in one branch of an Akka stream does not fail execution of the other branch

I have a fairly simple Akka graph with one source and two sinks.
I expect that if the sink in one branch fails, it should stop execution of the second branch, but that is not what I see in reality: the second branch keeps consuming the rest of the elements. Am I missing anything?

Sink<Integer, CompletionStage<Done>> topHeadSink = Sink.foreach(o -> {
  System.out.println("Upper sink " + o);
  throw new RuntimeException("UPS");
});
Sink<Integer, CompletionStage<Done>> bottomHeadSink = Sink.foreach(o -> System.out.println("Bottom sink " + o));

final RunnableGraph<Pair<CompletionStage<Done>, CompletionStage<Done>>> g =
    RunnableGraph.<Pair<CompletionStage<Done>, CompletionStage<Done>>>fromGraph(
        GraphDSL.create(
            topHeadSink, // import this sink into the graph
            bottomHeadSink, // and this as well
            Keep.both(),
            (b, top, bottom) -> {
              final UniformFanOutShape<Integer, Integer> bcast = b.add(Broadcast.create(2));

              b.from(b.add(Source.from(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10))))
                  .viaFanOut(bcast)
                  .to(top);
              b.from(bcast).toInlet(bottom.in());
              return ClosedShape.getInstance();
            }));

g.run(_runner.actorSystem());

The result of executing this graph is:

Upper sink 1
Bottom sink 1
Bottom sink 2
Bottom sink 3
Bottom sink 4
Bottom sink 5
Bottom sink 6
Bottom sink 7
Bottom sink 8
Bottom sink 9
Bottom sink 10
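
A possible explanation (my assumption, not confirmed in the thread): Broadcast only cancels its upstream once all downstreams have cancelled, unless eagerCancel is enabled. A minimal Scala sketch of the eager variant:

import akka.actor.ActorSystem
import akka.stream.ClosedShape
import akka.stream.scaladsl.{Broadcast, GraphDSL, RunnableGraph, Sink, Source}

implicit val system: ActorSystem = ActorSystem("demo")

val g = RunnableGraph.fromGraph(GraphDSL.create() { implicit b =>
  import GraphDSL.Implicits._
  // eagerCancel = true: a failing (hence cancelling) branch cancels upstream,
  // which stops the other branch as well
  val bcast = b.add(Broadcast[Int](2, eagerCancel = true))
  Source(1 to 10) ~> bcast
  bcast ~> Sink.foreach[Int] { i => println(s"Upper $i"); throw new RuntimeException("UPS") }
  bcast ~> Sink.foreach[Int](i => println(s"Bottom $i"))
  ClosedShape
})
g.run()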

2 posts - 2 participants

Akka 2.6.10 released

Dear hakkers,

We are excited to announce a new patch release of Akka 2.6. Notable changes relative to 2.6.9 include:

  • Improvements of rolling updates and rebalance in Cluster Sharding, see below
  • Quick dissemination of downing decisions, #29612
  • Log message sizes in Artery, #29683
  • Configurable stream restart deadline, thanks to @r-glyde, #29291
  • Config for when to move to WeaklyUp, #29665
  • Deliver Terminated after ordinary messages in Artery, #28695
  • Support async reply in EventSourcedBehaviorTestKit, #29602
  • Update Aeron to 1.30.0

As well as some important bug fixes:

  • Proper threadsafe collection of stream snapshots, #28960
  • Use correct heartbeat-interval for cross-dc failure detection, #29614

2.6.10 includes 53 closed issues. The complete list can be found on the 2.6.10 milestone on GitHub.

Sharding improvements

Akka 2.6.10 includes several improvements of Cluster Sharding for better rolling updates and a new faster rebalance algorithm.

To make rolling updates as smooth as possible there is a configuration property that defines the version of the application. This is used by rolling update features to distinguish between old and new nodes, and avoid allocation to old nodes during a rolling update. See the documentation for how to enable this feature.

There is also a new health check for Cluster Sharding that you can enable when you don’t want to receive production traffic until the local shard region is ready to retrieve locations for shards.

The new rebalance algorithm can reach optimal balance in a few rebalance rounds (typically 1 or 2), whereas the old algorithm would move only one shard per round (every 10 seconds). You can still configure limits on how many shards to move in each round. The new algorithm is recommended and will become the default in future versions of Akka, but currently you have to enable it explicitly as described in the documentation.

Credits

For this release we had the help of 16 committers – thank you all very much!

commits  added  removed
     23   3156      823 Patrik Nordwall
      9     17       24 yiksanchan
      6    995      137 Johan Andrén
      5    382      142 Christopher Batey
      5    159       57 Renato Cavalcanti
      4    145       14 Muskan Gupta
      3    321        8 Ignasi Marimon-Clos
      3     10        3 Arnout Engelen
      1    688      308 r-glyde
      1     24        9 Adrian
      1      3        3 Seth Tisue
      1      1        2 Johannes Rudolph
      1      1        1 Evan Chan
      1      1        1 Josep Prat
      1      1        1 Upapan Vongkiatkachorn
      1      1        1 Stefano Baghino

Lightbend employs the Akka core team. If you’re looking to take your Akka systems to the next level, let’s set up a time to discuss our enterprise-grade expert support, self-paced education courses, and technology enhancements that help you manage, monitor and secure your Akka systems - from development to production.

Happy hakking!

– The Akka Team

1 post - 1 participant

How to build a reusable extension for Akka applications?

Hi.

I’ve written a simple Akka application with one Actor class. In this application I have a manual send (!) function. There is a priority among messages, as control messages are required for the manual send function to proceed. My goal is to turn this application into an extension so that I can use it in other Akka applications too. Is this possible? My concern is that there are control messages involved in the function’s behavior. Can I somehow package my application classes as a jar file and include it in the desired applications alongside the Akka library, or should I use an Akka Extension?

I appreciate your guidance!
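
Packaging the classes as a jar works for plain reuse; if per-ActorSystem lifecycle matters (one instance of the actor per system), the classic Akka Extension mechanism can be sketched like this (PrioritySend is a made-up name, and the Props are a stand-in for the real actor):

import akka.actor.{ActorRef, ExtendedActorSystem, Extension, ExtensionId, ExtensionIdProvider, Props}

// Sketch (assumption, not from this thread): an Extension is created once per
// ActorSystem, which makes it a natural home for the actor and its control
// messages, hidden behind a small send API.
class PrioritySend(system: ExtendedActorSystem) extends Extension {
  private val actor: ActorRef = system.systemActorOf(Props.empty, "priority-send") // stand-in Props
  def send(msg: Any): Unit = actor ! msg
}

object PrioritySend extends ExtensionId[PrioritySend] with ExtensionIdProvider {
  override def lookup = PrioritySend
  override def createExtension(system: ExtendedActorSystem): PrioritySend = new PrioritySend(system)
}

// usage in any application: PrioritySend(system).send("hello")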

1 post - 1 participant

Using Flow as Sink/Source

Hi All,

I’m having a bit of difficulty combining some akka-streams components. The goal is to be able to return a Sink[ByteString, _] that other components can use, but I’ve been unable to figure out how to expose the source and sink sides of a flow correctly. My code is roughly as follows:

def getOutbound(): Future[Sink[ByteString, _]] = {
  Sink.fromMaterializer { (mat, attr) =>
    Sink.futureSink {
      val flow = Flow[ByteString]
      val source = Source.fromGraph(flow) // this is incorrect

      val create = for {
        thing <- Marshal(source).to[RequestEntity] // need to provide Source[ByteString, _]
        // make http request
      } yield {
        Sink.fromGraph(flow) // this is incorrect
      }
    }
  }
}

Can anyone give me some pointers on the correct way to do this? My gut feeling is that it shouldn’t require anything exotic; what I really want is to treat the HTTP request marshaller as a sink so that I can just say something like Flow[ByteString].to(myOutboundRequest).
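
One pattern that might fit (an assumption on my part, not a confirmed answer): MergeHub materializes a reusable Sink whose elements come out of the paired Source, which can then be marshalled into the request entity.

import akka.actor.ActorSystem
import akka.stream.scaladsl.MergeHub
import akka.util.ByteString

implicit val system: ActorSystem = ActorSystem("demo")

// MergeHub.source materializes a Sink; preMaterialize hands us both ends, so
// `source` can go into Marshal(...).to[RequestEntity] while `sink` is returned
// to callers. Note: a MergeHub-backed source never completes on its own, so the
// request entity stays open until the stream is torn down.
val (sink, source) = MergeHub.source[ByteString](perProducerBufferSize = 16).preMaterialize()
// sink: Sink[ByteString, NotUsed], source: Source[ByteString, NotUsed]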

1 post - 1 participant

What is the best way to send actor updates to SSE?

My project is a set of nodes. Each node is an actor. A user can view a set of nodes (in a web page). Each node can have its state updated independent of user action.

(For instance, a user A might only be viewing nodes 2-4. Node 1 could receive a message from a different user. Node 1 would then propagate the message, through actor messages, resulting in state changes to nodes 3 and 4. User A should see those changes to 3 and 4.)

I want the user to see the accurate state of the node as it changes. Therefore, I want the server to transmit events to the client when these node state changes occur.

I was thinking I’d use Server Sent Events (SSE). When a user/client connects, it would indicate which set of nodes it is viewing. When the user/client changes which set of nodes it is viewing, it would communicate that to the backend; dropping some and adding others. The SSE connection would transmit state changes for all the nodes that the user/client is currently viewing.

I know how to set up the route thanks to https://doc.akka.io/docs/akka-http/10.0/sse-support.html .

I am wondering: what is the best way to communicate the actor state changes to the route?

I am using akka cluster sharding and persistence on Akka 2.5 currently, but will update to 2.6 at some point. Each actor is identified by a uuid.

It sounds like I should use one EventBus across the entire cluster? And then any time an actor’s state changes, it would call eventBus.publish(Msg(uuid, state))?

It sounds like EventStream is not an option since I am using Cluster. Is that also true for EventBus in general? Should I be looking to Distributed Pub/Sub or an external message queue/stream instead?

Finally, either way, is there an example of hooking that up to SSE in the akka-http route? Is this mostly a matter of creating some sort of listener actor for that “view” of nodes that the user cares about, and then somehow hooking that up to the Source that SSE needs?
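
Not sure it is the best way, but one wiring sketch (my assumption, using the 2.5-era Source.actorRef overload, with a made-up registration step):

import akka.http.scaladsl.marshalling.sse.EventStreamMarshalling._
import akka.http.scaladsl.model.sse.ServerSentEvent
import akka.http.scaladsl.server.Directives._
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.Source

// Sketch (assumption, not from this thread): each SSE connection materializes
// an actor; registering that actor with whatever publishes Msg(uuid, state)
// turns published updates into events on the wire.
val route =
  path("events") {
    get {
      complete {
        Source
          .actorRef[ServerSentEvent](bufferSize = 64, overflowStrategy = OverflowStrategy.dropHead)
          .mapMaterializedValue { client =>
            // hypothetical: subscribe `client` for the node uuids this view covers
            client
          }
      }
    }
  }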

2 posts - 2 participants

Committing offsets for Kafka messages that are filtered

Hi, I’m using CommittableSource and I need at-least-once processing guarantees. I need to filter some messages from Kafka, so I was wondering: how do I commit offsets for messages that are filtered out?
My pipeline looks something like this:

source
  .throttle(25, 1.second)
  .filter(predicate)
  .groupedWithin(25, 5.seconds)
  .mapAsync(1) { batch =>
    processAsync(batch)
  }
  .toMat(Committer.sink(CommitterSettings(actorSystem)))(DrainingControl.apply)
  .run()
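
One way I can think of (a sketch, not a verified recommendation; processAsync is applied per message and the groupedWithin batching is dropped for brevity): instead of dropping filtered messages, map them straight to their committable offset, so the Committer still sees them.

import scala.concurrent.Future
import scala.concurrent.duration._

// Sketch (assumption, not from this thread): filtered-out messages skip
// processing but still flow to the committer as bare offsets.
// Requires an implicit ExecutionContext for the Future combinators.
source
  .throttle(25, 1.second)
  .mapAsync(1) { msg =>
    if (predicate(msg)) processAsync(msg).map(_ => msg.committableOffset)
    else Future.successful(msg.committableOffset) // nothing to process, still commit
  }
  .toMat(Committer.sink(CommitterSettings(actorSystem)))(DrainingControl.apply)
  .run()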

1 post - 1 participant

Jackson serialization of polymorphic types without rewriting trait

I am currently working on a research project using Akka Clustering. According to the documentation, Jackson is the preferred serializer, and I am particularly interested in the CBOR binary format due to heavy use of doubles in my work. However, I am using a heavily polymorphic type pulled in from an external library, and the way recommended in the Akka docs to support a polymorphic type is to use the @JsonSubTypes and @JsonTypeInfo annotations to define the different types.

Due to the large number of types extending this trait, it would be very difficult to define every one of them comprehensively, and doing so would require editing the source code of the library I’m using. Is there any other way to do this? The code is serialized and deserialized perfectly if sent through standard Java serialization.

This example is similar to my issue, and is taken from the Akka docs. So my question is essentially: Would it be possible to serialize this case class without the ability to directly add annotations to the trait?

final case class Zoo(primaryAttraction: Animal) extends CborSerializable

sealed trait Animal

final case class Lion(name: String) extends Animal

final case class Elephant(name: String, age: Int) extends Animal
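
One avenue that may be worth checking (an assumption, not a confirmed solution): Jackson mix-in annotations attach the @JsonTypeInfo/@JsonSubTypes metadata to a type without editing it, and Akka's Jackson serializer can, as far as I know, be given a customized ObjectMapper via JacksonObjectMapperProviderSetup / JacksonObjectMapperFactory. A sketch of the mix-in itself:

import com.fasterxml.jackson.annotation.{JsonSubTypes, JsonTypeInfo}
import com.fasterxml.jackson.databind.ObjectMapper

// Mix-in sketch (assumption, not from this thread): the annotations live on a
// separate type that is associated with Animal at mapper-configuration time.
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
@JsonSubTypes(Array(
  new JsonSubTypes.Type(value = classOf[Lion], name = "lion"),
  new JsonSubTypes.Type(value = classOf[Elephant], name = "elephant")))
trait AnimalMixin

val mapper = new ObjectMapper()
mapper.addMixIn(classOf[Animal], classOf[AnimalMixin])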

1 post - 1 participant
