
Akka Projection EventSourcedProvider event-adapters

Hello!

I have recently tried out Akka Projection’s EventSourcedProvider and stumbled into some problems with typing the underlying event in the EventEnvelope. E.g., per the documentation, it is SourceProvider[Offset, EventEnvelope[ShoppingCart.Event]].

I have been trying it out in combination with akka-persistence-jdbc and typing it like that doesn’t work if I configure an event adapter for my persistence query stream.

The problem is that the EventAdapter’s fromJournal method returns an akka.persistence.journal.EventSeq which in the end is the type that I receive in the journal’s eventsByTag query and not a ShoppingCart.Event.
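For reference, the adapter shape in question looks roughly like this (a sketch with illustrative names, not the original project's code):

import akka.persistence.journal.{ EventSeq, ReadEventAdapter }

// Illustrative stand-ins for the stored representation and the domain event.
final case class StoredItemAdded(itemId: String, quantity: Int)
object ShoppingCart { final case class ItemAdded(itemId: String, quantity: Int) }

class ShoppingCartReadAdapter extends ReadEventAdapter {
  // fromJournal returns an EventSeq (possibly several events per stored event)
  // rather than a single ShoppingCart.Event, which is where the typing problem
  // described above comes from.
  override def fromJournal(event: Any, manifest: String): EventSeq =
    event match {
      case StoredItemAdded(id, qty) => EventSeq.single(ShoppingCart.ItemAdded(id, qty))
      case other                    => EventSeq.single(other)
    }
}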

So instead of having EventSourcedProvider.eventsByTag[ShoppingCart.Event](...), my stream is EventSourcedProvider.eventsByTag[EventSeq](...). I wonder if there is a better way to solve this, or if I am overlooking something in general.

Additionally, if this actually turns out to be a problem, I wonder if it would make sense to allow configuring a typed “EventAdapter” per EventSourcedProvider, similar to how you configure one on an Akka Typed event sourced behavior:

  EventSourcedBehavior
    .apply[?, ?, ?](...)
    .eventAdapter(eventAdapter)

If the issue is unclear, I can also provide a sample project on github.

Thanks a lot!

1 post - 1 participant

Read full topic


Kafka stateful processing via Cluster sharding

Hello,
Our team needs to develop stateful processing of Kafka messages using the Cluster Sharding pattern.

From the examples below, it seems the sharding initialization part (KafkaClusterSharding.messageExtractorNoEnvelope(…)) always has to be kept separate from the consumer part (Consumer.source … shardRegion.ask[Done](replyTo => …)):

on https://developer.lightbend.com/start/?group=akka&project=akka-samples-cluster-sharding-scala and (github) akka-samples/tree/2.6/akka-sample-kafka-to-sharding-scala

It would be nice to know:
1- Is there an easier way to define the Kafka consumer and the cluster-sharding actors in one go, or is the split intentional? (A sketch of the split follows this list.)
2- Is stateful processing of Kafka messages achievable in another, “classic” way?
3- Has anyone gotten the second example working as-is, especially given issue https://github.com/akka/akka-samples/issues/219 (which I also hit)?
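For reference, a hedged sketch of how the two parts fit together, closely following the kafka-to-sharding sample (the entity protocol, topic and group names are illustrative; the extractor step is asynchronous, which is why it stays separate from the consumer):

import akka.Done
import akka.actor.typed.scaladsl.{ AskPattern, Behaviors }
import akka.actor.typed.{ ActorRef, ActorSystem, Behavior }
import akka.cluster.sharding.typed.scaladsl.{ ClusterSharding, Entity, EntityTypeKey }
import akka.kafka.cluster.sharding.KafkaClusterSharding
import akka.kafka.scaladsl.{ Committer, Consumer }
import akka.kafka.{ CommitterSettings, ConsumerSettings, Subscriptions }
import akka.util.Timeout
import org.apache.kafka.common.serialization.StringDeserializer

import scala.concurrent.duration._

object KafkaToShardingSketch {
  // Hypothetical entity protocol, only for illustration.
  final case class UserMessage(userId: String, payload: String, replyTo: ActorRef[Done])

  private def userBehavior(): Behavior[UserMessage] =
    Behaviors.receiveMessage { msg =>
      msg.replyTo ! Done
      Behaviors.same
    }

  def run()(implicit system: ActorSystem[Nothing]): Unit = {
    import AskPattern._
    import system.executionContext
    implicit val askTimeout: Timeout = 5.seconds

    val typeKey = EntityTypeKey[UserMessage]("User")
    val kafkaSettings =
      ConsumerSettings(system.classicSystem, new StringDeserializer, new StringDeserializer)
        .withBootstrapServers("localhost:9092")
        .withGroupId("user-processing")

    // Step 1: resolve the message extractor; it is asynchronous because it asks
    // Kafka for the topic's partition count, so it cannot be inlined below.
    KafkaClusterSharding(system.classicSystem)
      .messageExtractorNoEnvelope(
        timeout = 10.seconds,
        topic = "user-events",
        entityIdExtractor = (msg: UserMessage) => msg.userId,
        settings = kafkaSettings)
      .foreach { extractor =>
        // Step 2: initialize sharding with that extractor.
        val shardRegion = ClusterSharding(system).init(
          Entity(typeKey)(_ => userBehavior()).withMessageExtractor(extractor))

        // Step 3: only then start the consumer that asks the shard region.
        Consumer
          .sourceWithOffsetContext(kafkaSettings, Subscriptions.topics("user-events"))
          .mapAsync(4) { record =>
            shardRegion.ask[Done](replyTo => UserMessage(record.key, record.value, replyTo))(
              askTimeout, system.scheduler)
          }
          .runWith(Committer.sinkWithOffsetContext(CommitterSettings(system.classicSystem)))
      }
  }
}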

Thanks a lot anyway for your work

1 post - 1 participant

Read full topic

Binary compatibility verification in unit testing

Is there a way to catch Akka HTTP binary compatibility issues with a Spring Boot integration test? In my experience the problem does not show up in integration tests, whereas the exception below is observed at deployment time.

java.lang.IllegalStateException: You are using version 10.2.1 of Akka HTTP,
but it appears you (perhaps indirectly) also depend on older versions of
related artifacts. You can solve this by adding an explicit dependency
on version 10.2.1 of the [akka-http-spray-json] artifacts to your project.
See also:
https://doc.akka.io/docs/akka/current/common/binary-compatibility-rules.html#mixed-versioning-is-not-allowed
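One way to surface this earlier (a hedged sketch, assuming the version check runs when the Http() extension is first created, which is where the deployment failure points): force that initialization in a plain unit test that runs on the same classpath as the deployed artifact.

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import org.scalatest.funsuite.AnyFunSuite

class AkkaHttpVersionCheckSpec extends AnyFunSuite {
  test("mixed Akka HTTP artifact versions fail fast") {
    val system = ActorSystem("version-check")
    try {
      // Forces the akka-http extension to initialize on the test classpath;
      // with mixed artifact versions, this is expected to throw the same
      // IllegalStateException seen at deployment time.
      Http()(system)
    } finally {
      system.terminate()
    }
  }
}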

1 post - 1 participant

Read full topic

My bug while using multiple Cluster Sharding entity types with Akka Persistence

Hello, dear Akka gurus.
I’m dealing with a nasty bug, and I can’t tell whether it is in my code or not.
Consider that I have three types of entities (actors):

  • Type A: Akka’s Default ClusterSharding
  • Type B & C: KafkaClusterSharding

and all of them are persistent actors using the Cassandra persistence plugin. I see strange behavior in my tests:

  1. Actor b of type B spawns and responds to some messages.
  2. At some point actor b stops.
  3. Actor b recovers with the State of type A, cannot replay its events, and I get the following error log (disclaimer: I don’t use any EventAdapter):

[akka://myssys/system/sharding/A/1/c381b116-ba2a-403f-86d2-ef73d0c83e9a] - Initializing snapshot recovery: Recovery(SnapshotSelectionCriteria(9223372036854775807,9223372036854775807,0,0),9223372036854775807,9223372036854775807)
[akka://myssys/system/sharding/A/1/c381b116-ba2a-403f-86d2-ef73d0c83e9a] - Replaying events: from: 1, to: 9223372036854775807
[akka://myssys/system/sharding/A/1/c381b116-ba2a-403f-86d2-ef73d0c83e9a#111024197]]. Persistent Actors with the same PersistenceId should not run concurrently
[akka://myssys/system/sharding/A/1/c381b116-ba2a-403f-86d2-ef73d0c83e9a] - Returning recovery permit, reason: on replay failure: No match found for event [class B$B_Event_1] and state [B$State]. Has this event been stored using an EventAdapter? (of class java.lang.String)
[akka://myssys/system/sharding/A/1/c381b116-ba2a-403f-86d2-ef73d0c83e9a] - Recovery failure for persistenceId [PersistenceId(c381b116-ba2a-403f-86d2-ef73d0c83e9a)] after 59.05 ms
[akka://myssys/system/sharding/A/1/c381b116-ba2a-403f-86d2-ef73d0c83e9a] - Recovery failure for persistenceId [PersistenceId(c381b116-ba2a-403f-86d2-ef73d0c83e9a)] after 60.18 ms
[akka://myssys/system/sharding/A/1/c381b116-ba2a-403f-86d2-ef73d0c83e9a] - Supervisor RestartSupervisor saw failure: Exception during recovery. Last known sequence number [1]. PersistenceId [c381b116-ba2a-403f-86d2-ef73d0c83e9a], due to: Exception during recovery while handling [B$B_Event_1] with sequence number [1]. PersistenceId [c381b116-ba2a-403f-86d2-ef73d0c83e9a], due to: No match found for event [class B$B_Event_1] and state [A$State]. Has this event been stored using an EventAdapter? (of class java.lang.String)
akka.persistence.typed.internal.JournalFailureException: Exception during recovery. Last known sequence number [1]. PersistenceId [c381b116-ba2a-403f-86d2-ef73d0c83e9a], due to: Exception during recovery while handling [B$B_Event_1] with sequence number [1]. PersistenceId [c381b116-ba2a-403f-86d2-ef73d0c83e9a], due to: No match found for event [class B$B_Event_1] and state [A$State]. Has this event been stored using an EventAdapter? (of class java.lang.String)
at akka.persistence.typed.internal.ReplayingEvents.onRecoveryFailure(ReplayingEvents.scala:220)
at akka.persistence.typed.internal.ReplayingEvents.onJournalResponse(ReplayingEvents.scala:153)
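One detail I notice in the log (and it may be a red herring): the failing persistenceId is the bare UUID, so entities of different types that end up with the same entityId would share a journal stream. Qualifying the PersistenceId with the entity type name avoids that; an illustrative sketch (names are made up):

import akka.cluster.sharding.typed.scaladsl.EntityTypeKey
import akka.persistence.typed.PersistenceId

object PersistenceIds {
  val typeKeyA = EntityTypeKey[String]("A")
  val entityId = "c381b116-ba2a-403f-86d2-ef73d0c83e9a"

  // Unqualified: two entity types that reuse the same entityId end up on the
  // same journal stream.
  val unqualified = PersistenceId.ofUniqueId(entityId)
  // Qualified with the entity type name ("A|c381b116-..."): no collision.
  val qualified = PersistenceId(typeKeyA.name, entityId)
}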

What is my problem? Please help me.

1 post - 1 participant

Read full topic

The relationship between actor and thread

What is the relationship between an actor and a thread? How do actors achieve high concurrency on top of threads?
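In short, actors don’t own threads: a dispatcher schedules each message onto a shared thread pool, which is how a handful of threads can serve a very large number of actors. A small illustrative sketch (names are made up):

import akka.actor.typed.scaladsl.Behaviors
import akka.actor.typed.{ ActorSystem, Behavior }

object ThreadDemo extends App {
  // An actor is just a behavior plus a mailbox; it has no thread of its own.
  val worker: Behavior[String] = Behaviors.receive { (ctx, msg) =>
    // The dispatcher picks a pool thread to process this message; successive
    // messages to the same actor may run on different threads.
    ctx.log.info("{} handled on {}", msg, Thread.currentThread().getName)
    Behaviors.same
  }

  val system = ActorSystem(
    Behaviors.setup[Unit] { ctx =>
      // 100 actors, but only as many threads as the default dispatcher's pool.
      (1 to 100).foreach(i => ctx.spawn(worker, s"worker-$i") ! s"msg-$i")
      Behaviors.empty
    },
    "thread-demo")
}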

1 post - 1 participant

Read full topic

Akka 2.6.9 Artery TCP Implementation (Load balancer with local address)

Hi,

Below are my application.conf properties.
I have a load balancer IP, “myproject-uat”, on which the listener needs to start, and I have configured it in canonical.hostname.
But as per the log below:
[ArteryTcpTransport (akka://sys)] Remoting started with transport [Artery tcp]; listening on address [akka://sys@localaddress:2551] with UID [372678812823]
the listener is started on the local address, not on the virtual IP address.

[application.conf screenshot not reproduced]
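The general shape of the configuration being described (a hedged sketch with placeholder values, mirroring the similar configuration shown in a later topic) is:

akka.remote.artery {
  transport = tcp
  # the address advertised to other nodes: the load balancer / VIP name
  canonical.hostname = "myproject-uat"
  canonical.port = 2551
  # the address the server socket actually binds to on this machine
  bind.hostname = "localaddress"
  bind.port = 2551
}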

Can anyone please check the above configuration for starting the listener on the VIP address?

1 post - 1 participant

Read full topic

Leave a substream stopped without stopping sibling substreams for committablePartitionedSource

Hi guys. I’m using the committablePartitionedSource consumer. By default, when one substream stops, its sibling substreams are stopped as well. Can I somehow have the rest of the substreams continue processing after one has stopped?
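For context, a hedged sketch of the shape I’m after (settings and the process function are placeholders): each partition substream is run as its own stream and any failure is recovered locally, with the intent that the sibling substreams keep going.

import akka.Done
import akka.actor.typed.ActorSystem
import akka.kafka.scaladsl.{ Committer, Consumer }
import akka.kafka.{ CommitterSettings, ConsumerMessage, ConsumerSettings, Subscriptions }
import akka.stream.scaladsl.Sink

import scala.concurrent.Future
import scala.util.control.NonFatal

object PerPartitionStreams {
  def run(
      consumerSettings: ConsumerSettings[String, String],
      committerSettings: CommitterSettings,
      process: ConsumerMessage.CommittableMessage[String, String] => Future[Done])(
      implicit system: ActorSystem[Nothing]): Unit = {
    import system.executionContext

    Consumer
      .committablePartitionedSource(consumerSettings, Subscriptions.topics("events"))
      .mapAsyncUnordered(16) { case (topicPartition, partitionSource) =>
        partitionSource
          .mapAsync(1)(msg => process(msg).map(_ => msg.committableOffset))
          .runWith(Committer.sink(committerSettings))
          .recover { case NonFatal(e) =>
            // The intent: only this partition's stream ends here, and the
            // siblings keep running instead of the whole consumer failing.
            system.log.error(s"Substream for $topicPartition failed", e)
            Done
          }
      }
      .runWith(Sink.ignore)
  }
}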

1 post - 1 participant

Read full topic

Is it possible to scale consumer pods in Kubernetes using metrics from alpakka?

I have a set of micro-services: one Orchestrator actor and one or more Worker actors. The orchestrator monitors Kafka topics and hands them out to workers, which consume them using Alpakka. Each of these micro-services is its own Kubernetes pod.

My goal is to leverage something like the consumer lag metric, mentioned in the article below, to scale worker replicas as needed.

I have read elsewhere, though, that this metric may not be an accurate representation of things, as Alpakka clients may have cached messages that won’t show up in the lag metric. I have two questions from this:

  1. Is this a real concern and, if so, what’s a better metric to use?
  2. How do I get at these metrics via Alpakka, since I’m not using Akka Streams the way the article documents? I don’t need to expose them so that K8s autoscales; my Orchestrator could do the scaling, so any real-time access to one or more relevant metrics is fine. (A sketch of reading the client metrics follows this list.)
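As a starting point for question 2, a hedged sketch of reading the Kafka client’s own metrics from the materialized Consumer.Control of an Alpakka source (the helper name is made up; "records-lag-max" is a standard Kafka consumer metric):

import akka.kafka.scaladsl.Consumer
import org.apache.kafka.common.{ Metric, MetricName }

import scala.concurrent.{ ExecutionContext, Future }

object ConsumerLag {
  // Reads the "records-lag-max" gauge from the metrics exposed by the
  // materialized control of any Alpakka Kafka source.
  def maxLag(control: Consumer.Control)(implicit ec: ExecutionContext): Future[Option[Double]] =
    control.metrics.map { metrics =>
      metrics.collectFirst {
        case (name: MetricName, metric: Metric) if name.name == "records-lag-max" =>
          metric.metricValue() match {
            case n: Number => n.doubleValue()
            case _         => Double.NaN
          }
      }
    }
}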

1 post - 1 participant

Read full topic


Multi node testing In akka

Can I write multi-node tests with Java? There is no documentation for this in the Lightbend docs. Thanks.

3 posts - 2 participants

Read full topic

Akka - Binding address is failing

Hi,

Below are my application.conf properties.
I have two hostnames: “myproject-uat”, which I have configured in canonical.hostname, and the local address.

[application.conf screenshot not reproduced]

Can anyone please check the above configuration for binding both addresses, which have different IP addresses but the same port?

2 posts - 2 participants

Read full topic

Correct way to use withRequestTimeout directive with akka-grpc

I’m using akka-grpc 1.0.2 and trying to set up an endpoint timeout.

My route looks like this:

val handler = MyServiceHandler(...)
val rpcTimeout = ...
Route.asyncHandler {
  withRequestTimeout(rpcTimeout) { implicit rctx =>
    handler(rctx.request).map(RouteResult.Complete)
  }
}

The requests complete as expected, but I’m seeing this warning all over the logs:

2020-11-09 10:41:15.810 [myapp-akka.actor.default-dispatcher-5] WARN  akka.actor.ActorSystemImpl - withRequestTimeout was used in route however no request-timeout is set!

It looks like the withRequestTimeout directive prints this warning because it can’t find a Timeout-Access header. I’ve also tried injecting this header using mapRequest(_.withHeaders(new Timeout-Access(???))), but I’m not sure how to instantiate a proper Timeout-Access. The comments on the TimeoutAccess trait say “Not for user extension.”, which makes it seem like this is probably the wrong way to silence that warning.
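One workaround I’m considering (a sketch, not something the akka-grpc docs prescribe; the wrapper name and the 503 fallback are my own choices) is to enforce the timeout on the handler’s Future directly and drop the directive:

import akka.actor.typed.ActorSystem
import akka.http.scaladsl.model.{ HttpRequest, HttpResponse, StatusCodes }
import akka.pattern.after

import scala.concurrent.Future
import scala.concurrent.duration.FiniteDuration

object HandlerTimeout {
  // Wraps an akka-grpc handler function and completes with 503 if the inner
  // Future has not finished within the given timeout.
  def apply(handler: HttpRequest => Future[HttpResponse], timeout: FiniteDuration)(
      implicit system: ActorSystem[_]): HttpRequest => Future[HttpResponse] = { request =>
    import system.executionContext
    Future.firstCompletedOf(
      List(
        handler(request),
        after(timeout, system.classicSystem.scheduler)(
          Future.successful(HttpResponse(StatusCodes.ServiceUnavailable)))))
  }
}

The wrapped function could then be bound directly, e.g. via Http().newServerAt(...).bind(HandlerTimeout(handler, rpcTimeout)).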

1 post - 1 participant

Read full topic

Binding event adapters of type Read and Write to the same event type

I’m wondering why it is not allowed to bind a read-type and a write-type event adapter to the same event type, like this:

"akka.persistence.journal.ReadMeTwiceEvent" = [reader, writer]

… but I can bind two read-type event adapters:

"akka.persistence.journal.ReadMeTwiceEvent" = [reader, another-reader]

2 posts - 2 participants

Read full topic

Persistent Actor Discovery

Hello,

I am using Akka Persistence on its own, without clustering or sharding. I have a simple HTTP route with a POST and a GET. In the POST I create the persistent actor instance. I need to access the same actor in the GET to retrieve the state, and I also want it accessible after a server restart and recovery. What is the best way to look up the persistent actor? I tried sending a Receptionist registration from the persistent actor after successful creation and recovery; however, when I try to find the actor I get an empty listing. Is there another API to access an instance of a persistent actor?
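For context, one lookup-free alternative I have been considering (a rough sketch; it assumes a parent actor owns the persistent children and reuses the names from the code below):

import akka.actor.typed.ActorRef
import akka.actor.typed.scaladsl.ActorContext

// Looks up an existing child by name, or (re)spawns it; recovery then rebuilds
// the state from the journal, so no registry lookup is needed.
def companyActor(ctx: ActorContext[_], companyId: String): ActorRef[PersistentCompanyActor.Command] =
  ctx.child("CompanyActor-" + companyId) match {
    case Some(ref) => ref.unsafeUpcast[PersistentCompanyActor.Command]
    case None      => ctx.spawn(PersistentCompanyActor(companyId), "CompanyActor-" + companyId)
  }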

The code for the persistent actor is shown below.

object PersistentCompanyActor {

  sealed trait Event extends CborSerializable
  case class CreatedEvent(company: Company) extends Event
  case class UpdatedEvent(company: Company) extends Event

  sealed trait Command extends CborSerializable
  case class CreateCommand(company: Company, replyTo: ActorRef[Result]) extends Command
  case class UpdateCommand(company: Company, replyTo: ActorRef[Result]) extends Command
  case class GetCommand(replyTo: ActorRef[Company]) extends Command

  case class Result(success: Boolean, error: Option[String]) {
    def this() = this(true, None)
    def this(error: String) = this(false, Some(error))
  }

  def apply(companyId: String): Behavior[Command] = Behaviors.setup[PersistentCompanyActor.Command] { ctx =>
    EventSourcedBehavior[Command, Event, Option[Company]](
      persistenceId = PersistenceId.ofUniqueId(companyId),
      emptyState = None,
      commandHandler = commandHandler(ctx, _, _),
      eventHandler = eventHandler
    ).receiveSignal {
      case (state, RecoveryCompleted) => notifyRegistry(ctx, companyId)
    }
  }

  def commandHandler(ctx: ActorContext[_], oldState: Option[Company], cmd: Command): Effect[Event, Option[Company]] = cmd match {
    case CreateCommand(newState, replyTo) =>
      Effect.persist(UpdatedEvent(newState)).
        thenRun((x: Option[Company]) => notifyRegistry(ctx, newState.companyId.get)).
        thenReply(replyTo)(state => new Result())
    case UpdateCommand(newState, replyTo) =>
      Effect.persist(UpdatedEvent(newState)).
        thenReply(replyTo)(state => new Result())
    case GetCommand(replyTo) => Effect.none.thenReply(replyTo)(state => state.get)
  }

  def eventHandler(oldState: Option[Company], evt: Event): Option[Company] = {
    println("Event handler being run " + evt)
    evt match {
      case CreatedEvent(newState) => Some(newState)
      case UpdatedEvent(newState) => Some(newState)
    }
  }

  private def notifyRegistry(ctx: ActorContext[_], companyId: String) = {
    ctx.system.receptionist ! Receptionist.Register(ServiceKey("Company-" + companyId), ctx.self)
    ctx.log.info("Subscription message sent " + companyId)
  }

}

And this is the code where I create and try to find the actor. I am hoping there is a better way to do this, as the code below looks a bit convoluted for looking up an actor.

sealed trait CompanyRegistryActorCommand
case class CreateRegistry(company: Company, replyTo: ActorRef[Result]) extends CompanyRegistryActorCommand
case class UpdateRegistry(companyId: String, company: Company, replyTo: ActorRef[Result]) extends CompanyRegistryActorCommand
case class UpdateRegistry2(company: Company, companyActor: ActorRef[PersistentCompanyActor.Command], replyTo: ActorRef[Result]) extends CompanyRegistryActorCommand
case class GetRegistry(companyId: String, replyTo: ActorRef[Company]) extends CompanyRegistryActorCommand
case class GetRegistry2(companyActor: ActorRef[PersistentCompanyActor.Command], replyTo: ActorRef[Company]) extends CompanyRegistryActorCommand
case class WrappeduserCreatedResponse(ucr: UserCreationResponse) extends CompanyRegistryActorCommand
case class CompanyActorRegistration(id: String, actor: ActorRef[PersistentCompanyActor.Command]) extends CompanyRegistryActorCommand

object CompanyRegistryActor {

  var companyActors = scala.collection.mutable.Map.empty[String, ActorRef[PersistentCompanyActor.Command]]
  val Key: ServiceKey[CompanyRegistryActorCommand] = ServiceKey("CompanyRegistry")

  def apply(fbs: FirebaseService): Behavior[CompanyRegistryActorCommand] = Behaviors.setup[CompanyRegistryActorCommand] { ctx =>

    val fbsResponseHandler = ctx.messageAdapter[UserCreationResponse](response => WrappeduserCreatedResponse(response))
    ctx.system.receptionist ! Receptionist.Register(Key, ctx.self)

    Behaviors.receiveMessage {
      case CreateRegistry(company, replyTo) =>
        ctx.log.info("Create message received by company registry")
        val companyId = Some(UUID.randomUUID().toString)
        val firebaseActor = ctx.spawn(FirebaseActor(fbs), "FirebaseActor")
        firebaseActor ! CreateUserCommand(company.copy(companyId = companyId), replyTo, fbsResponseHandler)
        Behaviors.same
      case UpdateRegistry(companyId, company, replyTo) =>
        ctx.log.info("Update message received by company registry")
        implicit val timeout: Timeout = 1.second
        val key: ServiceKey[PersistentCompanyActor.Command] = ServiceKey("Company-" + companyId)
        ctx.ask(ctx.system.receptionist, Find(key)){
          case Success(listing: Listing) =>
            ctx.log.info("Company actor looked up " + companyId)
            val companyActor = listing.serviceInstances[PersistentCompanyActor.Command](key).head
            UpdateRegistry2(company, companyActor, replyTo)
        }
        Behaviors.same
      case UpdateRegistry2(company, companyActor, replyTo) =>
        companyActor ! UpdateCommand(company, replyTo)
        Behaviors.same
      case GetRegistry(companyId, replyTo) =>
        ctx.log.info("Get message received by company registry")
        implicit val timeout: Timeout = 1.second
        val key: ServiceKey[PersistentCompanyActor.Command] = ServiceKey("Company-" + companyId)
        ctx.ask(ctx.system.receptionist, Find(key)){
          case Success(listing: Listing) =>
            ctx.log.info("Company actor looked up " + companyId)
            val companyActor = listing.serviceInstances[PersistentCompanyActor.Command](key).head
            GetRegistry2(companyActor, replyTo)
        }
        Behaviors.same
      case GetRegistry2(companyActor, replyTo) =>
        ctx.log.info("Get message received by company registry")
        companyActor ! GetCommand(replyTo)
        Behaviors.same
      case WrappeduserCreatedResponse(response) =>
        ctx.log.info("Firebase response received by registry")
        if (response.success) {
          val companyActor = ctx.spawn(PersistentCompanyActor(response.company.companyId.get), "CompanyActor-" + response.company.companyId.get)
          companyActors += (response.company.companyId.get -> companyActor)
          companyActor ! CreateCommand(response.company, response.originator)
        } else {
          response.originator ! Result(response.success, response.error)
        }
        Behaviors.same
      case CompanyActorRegistration(id, actor) =>
        companyActors += (id -> actor)
        Behaviors.same
    }
  }

}

Many thanks

10 posts - 2 participants

Read full topic

Persistence of Initial State

Hello,

When a persistent actor is created with an initial state and it doesn’t receive any commands before the server is shut down, where is the initial state stored, and how is it recovered when the server is restarted? I am unable to see any entry in the snapshot or journal tables.
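For illustration, a minimal sketch (a hypothetical counter entity, not the actual actor in question) of where the initial state lives:

import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{ Effect, EventSourcedBehavior }

object Counter {
  final case class Add(n: Int)

  def apply(id: String): Behavior[Add] =
    EventSourcedBehavior[Add, Int, Int](
      persistenceId = PersistenceId.ofUniqueId(id),
      // The initial state lives only here, in code: it is recreated on every
      // start and is never written to the journal or the snapshot store.
      emptyState = 0,
      commandHandler = (_, cmd) => Effect.persist(cmd.n),
      eventHandler = (state, n) => state + n)
}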

Kind regards

2 posts - 2 participants

Read full topic

Akka 2.6.9- InboundHandshake exception

I implemented Artery TCP remoting in a Java project, and I’m getting an exception on the destination module (shown in a screenshot that is not reproduced here).

application.conf has the following configuration:

artery {
  enabled = on
  transport = tcp
  canonical.hostname = "host-uat"
  canonical.port = 2551
  bind.hostname = "localaddress"
  bind.port = 2551
}
Can anyone please explain the exception I am getting (from the screenshot mentioned above)?

6 posts - 2 participants

Read full topic


Spray-json 1.3.6 released

We released a small maintenance release for spray-json, version 1.3.6.

The changes:

  • Preserve order of iterable in viaSeq in Scala 2.13 (#330)
  • Throw instead of overflowing silently when numeric values are out of range for the target type (#208); see the small example after this list
  • Convert Float to JsNumber directly without going through Double (#241)
  • Build with latest Scala versions (#334)
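For the out-of-range change (#208), a small illustrative example (the exact exception type and message are not quoted here):

import spray.json._
import DefaultJsonProtocol._

object OverflowDemo extends App {
  val tooBig = JsNumber(Long.MaxValue)
  println(tooBig.convertTo[Long])   // fits in a Long, still fine
  // In 1.3.5 and earlier the next line would silently wrap around; with the
  // #208 change it is expected to throw instead of overflowing.
  // tooBig.convertTo[Int]
}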

Thanks to all the contributors!

Happy hakking,
Johannes

1 post - 1 participant

Read full topic

Custom transport with Akka Artery TCP (version 2.6.9)

Eliminate timeouts on admin APIs

Hello,

We are using Akka HTTP timeouts within our systems, such as request-timeout and idle-timeout.
We have some admin pages (for internal debugging purposes) where we want to disable all the timeouts that are configured in production.

so I have a few questions:

  1. For those admin pages we are using withoutRequestTimeout; does it also disable the idle-timeout? If it affects request-timeout only, is there something else we can use to eliminate the idle-timeout as well?
  2. Is there a way to have two separate paths, one for admin and the other for production, with different timeout configurations? (See the sketch after this list.)
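For question 2, a hedged sketch with placeholder routes: scope the timeout directive per subtree so that only the admin paths run without a request timeout. As far as I know this affects request-timeout only; idle-timeout remains a connection-level server setting.

import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route

object AdminTimeouts {
  // Placeholder routes; the real admin and production routes go here.
  val adminRoutes: Route      = complete("admin")
  val productionRoutes: Route = complete("prod")

  val route: Route =
    concat(
      // Only the admin subtree runs without a request timeout; everything else
      // keeps the configured akka.http.server.request-timeout.
      pathPrefix("admin")(withoutRequestTimeout(adminRoutes)),
      productionRoutes)
}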

Thank you!

2 posts - 2 participants

Read full topic

Akka Typed persistence at-least-once delivery

I’m just trying to play out in my head how to do persisted at-least-once delivery in Akka Typed.

I can persist the recipient info when receiving a command to send a message, and then send the message. Then I can persist whether that message has been acknowledged (received) successfully, and I can also schedule retries. But in the scenario where the system crashes before an answer has been received, how do I make sure that the message is sent again after the actor is restored? Do I have to send a message to the actor’s self after restore to check whether there is an unanswered message, or is there a more idiomatic way? (A small sketch of the restore-and-resend idea follows.)
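For reference, a minimal sketch of the restore-and-resend idea described above (hypothetical protocol; storing the ActorRef in the event is for illustration only):

import akka.actor.typed.{ ActorRef, Behavior }
import akka.persistence.typed.{ PersistenceId, RecoveryCompleted }
import akka.persistence.typed.scaladsl.{ Effect, EventSourcedBehavior }

object Forwarder {
  sealed trait Command
  final case class Send(payload: String, to: ActorRef[String]) extends Command
  final case class Confirmed(payload: String)                  extends Command

  sealed trait Event
  final case class Sent(payload: String, to: ActorRef[String]) extends Event
  final case class Acked(payload: String)                      extends Event

  final case class State(unconfirmed: Map[String, ActorRef[String]] = Map.empty)

  def apply(id: String): Behavior[Command] =
    EventSourcedBehavior[Command, Event, State](
      PersistenceId.ofUniqueId(id),
      State(),
      commandHandler = (_, cmd) =>
        cmd match {
          case Send(p, to)  => Effect.persist(Sent(p, to)).thenRun(_ => to ! p)
          case Confirmed(p) => Effect.persist(Acked(p))
        },
      eventHandler = (state, evt) =>
        evt match {
          case Sent(p, to) => state.copy(unconfirmed = state.unconfirmed + (p -> to))
          case Acked(p)    => state.copy(unconfirmed = state.unconfirmed - p)
        }
    ).receiveSignal {
      // After a crash and recovery, anything still unconfirmed is re-sent here,
      // without having to message self.
      case (state, RecoveryCompleted) =>
        state.unconfirmed.foreach { case (p, to) => to ! p }
    }
}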

1 post - 1 participant

Read full topic

Interrupting an Akka thread under certain conditions to avoid memory consumption

I hope this question makes sense; I have recently been playing with Akka and bugging myself with something that seems trivial at first but may affect the whole system.

What I’m trying to solve: whenever an actor makes a call to another actor with a given timeout, I’m 100% sure that if the timeout is triggered, the response that is still being computed underneath, and will no longer be consumed, won’t fit in memory and will crash the whole system.

Would it be safe, in case of a timeout, to notify the underlying actor’s thread to stop that computation and all the subsequent calls? I will put some pseudocode below to illustrate.

        Patterns.ask(...)
            .onComplete(new OnComplete<Object>() {
            @Override
            public void onComplete(Throwable failure, Object success) throws Throwable {
                // Interrupt Future<Object> (Patterns.ask() thread) in here?
            }
        }, ...);

1 post - 1 participant

Read full topic
