Channel: Akka Libraries - Discussion Forum for Akka technologies

Akka cluster using Cassandra driver and running on EKS, with Keyspaces for Cassandra, generates weird warning


This WARN appears after the Cassandra driver has persisted data.

[2020-09-10 07:12:00,123] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-1] [] - [s0|/3.25.37.71:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0x6416a3c0, L:/10.62.160.46:38170 - R:3.25.37.71/3.25.37.71:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:00,128] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-0] [] - [s0|/3.25.37.70:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0xfc77d13f, L:/10.62.160.46:55334 - R:3.25.37.70/3.25.37.70:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:00,159] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-0] [] - [s0|/3.25.37.127:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0xaed46d98, L:/10.62.160.46:41326 - R:3.25.37.127/3.25.37.127:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:00,203] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-1] [] - [s0|/3.25.37.66:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0xf8f9a736, L:/10.62.160.46:32856 - R:3.25.37.66/3.25.37.66:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:00,387] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-1] [] - [s0|/3.25.37.126:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0x30782974, L:/10.62.160.46:53796 - R:3.25.37.126/3.25.37.126:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:00,459] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-0] [] - [s0|/3.25.37.125:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0x740c3446, L:/10.62.160.46:44512 - R:3.25.37.125/3.25.37.125:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:00,477] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-0] [] - [s0|/3.25.37.65:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0xd6a32bed, L:/10.62.160.46:58432 - R:3.25.37.65/3.25.37.65:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:00,515] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-1] [] - [s0|/3.25.37.121:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0xbfff5e3c, L:/10.62.160.46:51852 - R:3.25.37.121/3.25.37.121:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:03,776] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-1] [] - [s0|/3.25.37.71:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0xd5751f2b, L:/10.62.160.46:38222 - R:3.25.37.71/3.25.37.71:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:04,002] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-1] [] - [s0|/3.25.37.126:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0x31832a99, L:/10.62.160.46:53842 - R:3.25.37.126/3.25.37.126:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:04,112] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-0] [] - [s0|/3.25.37.125:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0x593bdfca, L:/10.62.160.46:44558 - R:3.25.37.125/3.25.37.125:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:04,214] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-1] [] - [s0|/3.25.37.66:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0x74181e17, L:/10.62.160.46:32908 - R:3.25.37.66/3.25.37.66:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:04,299] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-0] [] - [s0|/3.25.37.70:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0xd940b5f6, L:/10.62.160.46:55396 - R:3.25.37.70/3.25.37.70:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:04,386] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-0] [] - [s0|/3.25.37.127:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0x0c030eca, L:/10.62.160.46:41386 - R:3.25.37.127/3.25.37.127:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:04,608] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-1] [] - [s0|/3.25.37.121:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0xc87a74f8, L:/10.62.160.46:51902 - R:3.25.37.121/3.25.37.121:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))
[2020-09-10 07:12:04,657] [WARN] [] [com.datastax.oss.driver.internal.core.pool.ChannelPool] [s0-admin-0] [] - [s0|/3.25.37.65:9142]  Error while opening new channel (ConnectionInitException: [s0|id: 0x7fa12653, L:/10.62.160.46:58486 - R:3.25.37.65/3.25.37.65:9142] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.4.0, CLIENT_ID=e5c3c8f8-553e-4ffb-a753-f33678d6d15c}): failed to send request (javax.net.ssl.SSLException: SSLEngine closed already))

The data is still being persisted, though.

I also used the configuration suggested by AWS: https://docs.aws.amazon.com/keyspaces/latest/devguide/programmatic.credentials.html#programmatic.credentials.SigV4_MCS

datastax-java-driver {

    basic.contact-points = [${?CASSANDRA_DNS_AND_PORT}]

    advanced.auth-provider {
        class = software.aws.mcs.auth.SigV4AuthProvider
        aws-region = ap-southeast-2
    }

    basic.load-balancing-policy {
        local-datacenter = "ap-southeast-2"
    }

    advanced.ssl-engine-factory {
        class = DefaultSslEngineFactory
        truststore-path = "/cassandra_truststore.jks"
        truststore-password = ${?CASSANDRA_TRUSTSTORE_PASSWORD}
    }
}

datastax-java-driver.profiles {
  akka-persistence-cassandra-profile {
    basic.request.consistency = LOCAL_QUORUM
  }
}
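Since the data is persisted despite the warnings, one workaround — an assumption on my part, not something from the AWS guide — is to raise the log level for that driver logger so the noise disappears, e.g. in a Logback configuration:

```xml
<!-- hypothetical logback.xml fragment: hide the benign channel-pool warnings -->
<logger name="com.datastax.oss.driver.internal.core.pool.ChannelPool" level="ERROR"/>
```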

1 post - 1 participant

Read full topic


Why does Akka recommend having 10x as many shards as nodes?


I was trying an experiment with persistent actors on a cluster, backed by Cassandra, with a million actors. I had 3 nodes with 9 shards, and the throughput was lower than I expected, until I found this line in the documentation:

As a rule of thumb, the number of shards should be a factor ten greater than the planned maximum number of cluster nodes. It doesn’t have to be exact

After I changed the number of shards to 30, the throughput improved significantly (almost 5 times).

My question is, what is happening under the hood that caused this, and where does this 10x guideline come from?
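For reference, the shard count in this experiment is driven by a single setting (a sketch for Akka Cluster Sharding Typed; the value reflects the 10x rule of thumb for a 3-node cluster):

```hocon
akka.cluster.sharding {
  # rule of thumb: roughly 10x the planned maximum number of cluster nodes
  number-of-shards = 30
}
```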

1 post - 1 participant

Read full topic

Akka Cassandra projection not working when used along with Kafka sharded cluster


Hey guys,

I have a project that uses an Akka sharded cluster with Kafka and persistent actors on Cassandra, and it works fine. The application's goal is to consume messages from Kafka, dispatch them to the sharded actors for processing, save the state for resilience, perform the business process, and then use Akka Projections for delivery guarantees on another Kafka topic for downstream processing.

Here is my Kafka sharded cluster setup:

public TradeKafkaProcessorService(final ActorSystem<?> system, final String kafkaBootstrap) {

        sharding = ClusterSharding.get(system);
        objectMapper = JacksonObjectMapperProvider.get(Adapter.toClassic(system))
                .getOrCreate("jackson-json", Optional.empty());

        final String securityProtocol = ConfigFactory.load().getString(SECURITY_PROTOCOL_KEY);
        final String sslProtocol = ConfigFactory.load().getString(SSL_PROTOCOL_KEY);

        CompletionStage<KafkaClusterSharding.KafkaShardingNoEnvelopeExtractor<Trade.Command>> messageExtractor =
                KafkaClusterSharding.get(system)
                        .messageExtractorNoEnvelope(
                                REGISTER_TRADE_TOPIC,
                                Duration.ofSeconds(10),
                                (Trade.Command msg) -> msg.toString(),
                                ConsumerSettings.create(
                                        Adapter.toClassic(system), new StringDeserializer(), new StringDeserializer())
                                        .withBootstrapServers(kafkaBootstrap)
                                        .withProperty(SECURITY_PROTOCOL, securityProtocol)
                                        .withProperty(SSL_PROTOCOL, sslProtocol)
                                        .withGroupId(
                                                ENTITY_TYPE_KEY
                                                        .name()));

        messageExtractor.thenAccept(
                extractor ->
                        ClusterSharding.get(system)
                                .init(
                                        Entity.of(
                                                ENTITY_TYPE_KEY,
                                                entityContext ->
                                                        Trade.create(
                                                                entityContext.getEntityId(),
                                                                PersistenceId.of(
                                                                        entityContext.getEntityTypeKey().name(), entityContext.getEntityId())))
                                                .withAllocationStrategy(
                                                        new ExternalShardAllocationStrategy(
                                                                system, ENTITY_TYPE_KEY.name(), Timeout.create(Duration.ofSeconds(5))))
                                                .withMessageExtractor(extractor)));

        ActorRef<ConsumerRebalanceEvent> rebalanceListener =
                KafkaClusterSharding.get(system).rebalanceListener(ENTITY_TYPE_KEY);

        ConsumerSettings<String, byte[]> consumerSettings =
                ConsumerSettings.create(
                        Adapter.toClassic(system), new StringDeserializer(), new ByteArrayDeserializer())
                        .withBootstrapServers(kafkaBootstrap)
                        .withGroupId(ENTITY_TYPE_KEY.name());

        // pass the rebalance listener to the topic subscription
        AutoSubscription subscription =
                Subscriptions.topics(REGISTER_TRADE_TOPIC)
                        .withRebalanceListener(Adapter.toClassic(rebalanceListener));

        Consumer.plainSource(consumerSettings, subscription)
                .map(e -> {

                            final String key = e.key();
                            final String value = new String(e.value());

                            final TradeCaptureReport tradeCaptureReport = objectMapper.readValue(value, TradeCaptureReport.class);
                            handleMessage(key, tradeCaptureReport);

                            return tradeCaptureReport;

                        }
                )
                .runWith(Sink.ignore(), system);


    }

The application works fine, as the logs show:

27.0.0.1:2553/system/cluster/core/daemon#-1697575068]] to [akka://Trade@127.0.0.1:53617]
[2020-09-16 11:50:44,907] [INFO] [akka://Trade@127.0.0.1:2553] [akka.cluster.Cluster] [Trade-akka.actor.default-dispatcher-36] [Cluster(akka://Trade)] - Cluster Node [akka://Trade@127.0.0.1:2553] - Node [akka://Trade@127.0.0.1:53617] is JOINING, roles [dc-default]
[2020-09-16 11:50:44,909] [INFO] [akka://Trade@127.0.0.1:53617] [akka.cluster.Cluster] [Trade-akka.actor.default-dispatcher-3] [Cluster(akka://Trade)] - Cluster Node [akka://Trade@127.0.0.1:53617] - Welcome from [akka://Trade@127.0.0.1:2553]
[2020-09-16 11:50:46,176] [INFO] [akka://Trade@127.0.0.1:2553] [akka.cluster.Cluster] [Trade-akka.actor.default-dispatcher-34] [Cluster(akka://Trade)] - Cluster Node [akka://Trade@127.0.0.1:2553] - Leader is moving node [akka://Trade@127.0.0.1:2554] to [Up]
[2020-09-16 11:50:46,177] [INFO] [akka://Trade@127.0.0.1:2553] [akka.cluster.Cluster] [Trade-akka.actor.default-dispatcher-34] [Cluster(akka://Trade)] - Cluster Node [akka://Trade@127.0.0.1:2553] - Leader is moving node [akka://Trade@127.0.0.1:53617] to [Up]
[2020-09-16 11:50:46,690] [INFO] [akka://Trade@127.0.0.1:2554] [akka.cluster.singleton.ClusterSingletonManager] [Trade-akka.actor.default-dispatcher-16] [akka://Trade@127.0.0.1:2554/system/sharding/register-trade-topic-group-idCoordinator] - ClusterSingletonManager state change [Start -> Younger]
[2020-09-16 11:50:46,809] [INFO] [akka://Trade@127.0.0.1:53617] [akka.cluster.singleton.ClusterSingletonManager] [Trade-akka.actor.default-dispatcher-3] [akka://Trade@127.0.0.1:53617/system/sharding/register-trade-topic-group-idCoordinator] - ClusterSingletonManager state change [Start -> Younger]
[2020-09-16 11:51:08,785] [INFO] [akka://Trade@127.0.0.1:2554] [systems.clearpay.trade.Trade] [Trade-akka.actor.default-dispatcher-56] [akka://Trade/system/sharding/register-trade-topic-group-id/0/1] - Starting Trade 1
[2020-09-16 11:51:08,826] [DEBUG] [akka://Trade@127.0.0.1:2554] [systems.clearpay.trade.Trade] [Trade-akka.actor.default-dispatcher-56] [akka://Trade/system/sharding/register-trade-topic-group-id/0/1] - Initializing snapshot recovery: Recovery(SnapshotSelectionCriteria(9223372036854775807,9223372036854775807,0,0),9223372036854775807,9223372036854775807)
[2020-09-16 11:51:08,848] [DEBUG] [akka://Trade@127.0.0.1:2554] [systems.clearpay.trade.Trade] [Trade-akka.actor.default-dispatcher-56] [akka://Trade/system/sharding/register-trade-topic-group-id/0/1] - Snapshot recovered from 0 Map() VersionVector()
[2020-09-16 11:51:08,853] [DEBUG] [akka://Trade@127.0.0.1:2554] [systems.clearpay.trade.Trade] [Trade-akka.actor.default-dispatcher-56] [akka://Trade/system/sharding/register-trade-topic-group-id/0/1] - Replaying events: from: 1, to: 9223372036854775807
[2020-09-16 11:51:09,066] [DEBUG] [akka://Trade@127.0.0.1:2554] [systems.clearpay.trade.Trade] [Trade-akka.actor.default-dispatcher-56] [akka://Trade/system/sharding/register-trade-topic-group-id/0/1] - Recovery successful, recovered until sequenceNr: [9]
[2020-09-16 11:51:09,066] [DEBUG] [akka://Trade@127.0.0.1:2554] [systems.clearpay.trade.Trade] [Trade-akka.actor.default-dispatcher-56] [akka://Trade/system/sharding/register-trade-topic-group-id/0/1] - Returning recovery permit, reason: replay completed successfully
[2020-09-16 11:51:09,068] [DEBUG] [akka://Trade@127.0.0.1:2554] [systems.clearpay.trade.Trade] [Trade-akka.actor.default-dispatcher-56] [akka://Trade/system/sharding/register-trade-topic-group-id/0/1] - Recovery for persistenceId [PersistenceId(register-trade-topic-group-id|1)] took 217.4 ms
[2020-09-16 11:51:09,092] [INFO] [akka://Trade@127.0.0.1:2554] [systems.clearpay.trade.Trade] [Trade-akka.actor.default-dispatcher-56] [akka://Trade/system/sharding/register-trade-topic-group-id/0/1] - State {trades=9, latest state=REGISTERED}
[2020-09-16 11:51:09,097] [DEBUG] [akka://Trade@127.0.0.1:2554] [systems.clearpay.trade.Trade] [Trade-akka.actor.default-dispatcher-56] [akka://Trade/system/sharding/register-trade-topic-group-id/0/1] - Handled command [systems.clearpay.trade.Trade$Register], resulting effect: [Persist(systems.clearpay.trade.Trade$Registered@5d97b5c5)], side effects: [1]
[2020-09-16 11:51:09,123] [DEBUG] [akka://Trade@127.0.0.1:2554] [systems.clearpay.trade.Trade] [Trade-akka.actor.default-dispatcher-16] [akka://Trade/system/sharding/register-trade-topic-group-id/0/1] - Received Journal response: WriteMessagesSuccessful after: 22308415 nanos
[2020-09-16 11:51:09,126] [DEBUG] [akka://Trade@127.0.0.1:2554] [systems.clearpay.trade.Trade] [Trade-akka.actor.default-dispatcher-56] [akka://Trade/system/sharding/register-trade-topic-group-id/0/1] - Received Journal response: WriteMessageSuccess(PersistentRepr(register-trade-topic-group-id|1,10,ee21dc38-2229-4ce4-97f7-ece6635d615c,0,None),1) after: 24871053 nanos

But the projection does not work at all. I get no errors, yet it does not do what it is supposed to do: save the projection offset to Cassandra and then run the publisher, which is a Kafka producer. Here is my setup:

public static Behavior<Void> create(final String kafkaBootstrap, final Mode mode) {
        return Behaviors.setup(context -> {

            ActorSystem<Void> system = context.getSystem();

            //---> SEND TO KAFKA PROJECTION
            ProducerSettings<String, String> producerSettings =
                    ProducerSettings.create(context.getSystem(), new StringSerializer(), new StringSerializer())
                            .withBootstrapServers(kafkaBootstrap);
            // FIXME classicSystem might not be needed in later Alpakka Kafka version?
            SendProducer<String, String> sendProducer =
                    new SendProducer<>(producerSettings, system.classicSystem());


            // #sendToKafkaProjection
            SourceProvider<Offset, EventEnvelope<WordEnvelope>> sourceProvider =
                    EventSourcedProvider.eventsByTag(system, CassandraReadJournal.Identifier(), "trades-1");

            // #atLeastOnce
            Projection<EventEnvelope<WordEnvelope>> projection =
                    CassandraProjection.atLeastOnce(
                            ProjectionId.of("all-trades", "trades-1"),
                            sourceProvider,
                            () -> new WordPublisher(EVENT_SOURCING_TOPIC, sendProducer));

            context.spawn(ProjectionBehavior.create(projection), projection.projectionId().id());

            new TradeKafkaProcessorService(system, kafkaBootstrap);
            ClusterBootstrap.get(Adapter.toClassic(system)).start();

            if (mode.equals(Mode.EKS)) {
                AkkaManagement.get(Adapter.toClassic(system)).start();
            }

            return Behaviors.empty();
        });
    }

I followed these docs:



Also the demo project:

Any ideas on this, folks?

1 post - 1 participant

Read full topic

Problem with Wildfly and Akka 2.6 related to flight recorder (jdk.jfr.Event)


Just wanted to let you know the following because I could not really find a solution to this on the internet:

I’ve just upgraded my project from Akka 2.5 to 2.6.9. We currently use Wildfly together with Akka (I know this is probably not the greatest idea, but we currently have to).

When I first started Wildfly I got the following error:

15:01:16,898 WARN [org.jb.m.define ] (EE-ManagedExecutorService-default-Thread-1) corrID= caller= user= Failed to define class akka.remote.artery.jfr.TransportStarted in Module "deployment.mym.war" from Service Module Loader: java.lang.NoClassDefFoundError: Failed to link akka/remote/artery/jfr/TransportStarted (Module "deployment.mym.war" from Service Module Loader): jdk/jfr/Event

Somehow, due to JBoss/Wildfly class loading, jdk.jfr.Event doesn’t seem to be accessible to Akka. I played around with jboss-deployment-structure.xml etc. but could not solve the problem so far. However, I could fix it by adding the following to my application.conf:

akka {
  java-flight-recorder {
    enabled = false
  }
}

For me it’s ok currently but I would still be interested if anyone figured out how to solve this without disabling the flight-recorder.

1 post - 1 participant

Read full topic

Issue while uploading compressed file to S3 using Alpakka


I tried to upload the contents of a compressed tar file to S3 using Alpakka, but only 1-2 entries were copied; the rest were skipped.
When I increased the chunk size to a big number (double the file size in bytes), it worked, but I suspect it will fail if the tar file is too big. Is this expected, or have I missed something?
Below is my code:

lazy val fileUploadRoutes: Route =
  withoutRequestTimeout {
    withoutSizeLimit {
      pathPrefix("files") {
        post {
          path("uploads") {
            extractMaterializer { implicit materializer =>
              fileUpload("file") {
                case (metadata, byteSource) =>
                  val uploadFuture = byteSource.async
                    .via(Compression.gunzip(200000000))
                    .via(Archive.tarReader()).async
                    .runForeach(f => {
                      f._2.runWith(s3AlpakkaService.sink(FileInfo(UUID.randomUUID().toString, f._1.filePath, metadata.getContentType)))
                    })
                  onComplete(uploadFuture) {
                    case Success(result) =>
                      log.info("Uploaded file to: " + result)
                      complete(StatusCodes.OK)
                    case Failure(ex) =>
                      log.error(ex, "Error uploading file")
                      complete(StatusCodes.FailedDependency, ex.getMessage)
                  }
              }
            }
          }
        }
      }
    }
  }
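For what it’s worth, a possible explanation (an assumption, not a confirmed diagnosis): Archive.tarReader() emits one (metadata, source) pair per entry, and those inner sources have to be consumed strictly in order, while runForeach kicks off each inner upload without waiting for it to finish. A sketch that consumes the entries sequentially with mapAsync(1), reusing the s3AlpakkaService.sink helper from the code above:

```scala
val uploadFuture = byteSource.async
  .via(Compression.gunzip(200000000))
  .via(Archive.tarReader())
  .mapAsync(1) { case (tarMeta, entrySource) =>
    // wait for each entry's upload to finish before pulling the next tar entry
    entrySource.runWith(
      s3AlpakkaService.sink(FileInfo(UUID.randomUUID().toString, tarMeta.filePath, metadata.getContentType)))
  }
  .runWith(Sink.ignore)
```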

1 post - 1 participant

Read full topic

Alpakka 2.0.2 is now available!


Dear Hakkers,

The Alpakka contributors are happy to announce Alpakka 2.0.2.

Alpakka is compatible with Akka 2.5.31+ and Akka 2.6.8+. All modules are published for Scala 2.12 and Scala 2.13, and most are available for Scala 2.11.

In Alpakka 2.0.2, most libraries Alpakka uses are updated to their latest patch versions.

The way forward

Many libraries Alpakka uses have released new minor and even major versions. We’ll most likely open the main branch for things that may break binary compatibility and may upgrade libraries very soon so the community can push things forward for Alpakka 3.0. Alpakka 3.0 will not support Scala 2.11 anymore.

New Connector: Google BigQuery

See how it is used in the documentation.

Slick/JDBC

The Java DSL can now use PreparedStatement to set values in SQL safely.

  • Slick/JDBC: Support PreparedStatement use in Java DSL #2318 by @ihostage

Kinesis KCL

AWS S3

  • AWS S3: Add access-style property (to support path-style access for non-AWS S3 services) #2392 by @laszlovandenhoek

Release notes

The full release notes are in the documentation.

Akka by Lightbend

The Akka core team is employed by Lightbend. If you’re looking to take your Akka systems to the next level, let’s set up a time to discuss our enterprise-grade expert support, self-paced education courses, and technology enhancements that help you manage, monitor and secure your Akka systems - from development to production.

Happy hakking,
Your Alpakkas

1 post - 1 participant

Read full topic

Akka projections on events that are not tagged


Hello

I want to start using Akka Projection in my project. I am using Akka Cluster and I’d like to use ShardedDaemonProcess to distribute the load. Currently I have a database with about a million events that have not been tagged, and as I understand it, I need to tag my events in order to use the library.

Is there any way to process events that have not been tagged?
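As a side note on the tagging side: events written from now on can be tagged via the behavior itself (a minimal sketch, assuming Akka Persistence Typed; Command, Event, State, the handlers, and the tag name are all placeholders):

```scala
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.EventSourcedBehavior

// hypothetical wiring; withTagger is the relevant part
def taggedBehavior(id: PersistenceId): EventSourcedBehavior[Command, Event, State] =
  EventSourcedBehavior(id, emptyState, commandHandler, eventHandler)
    .withTagger(_ => Set("my-tag")) // every event persisted from now on gets this tag
```

This only covers new events; the existing untagged events would still need a separate migration.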

6 posts - 2 participants

Read full topic

Akka HTTP Client Pool getting Request Timeout 408 after upgrade to 10.2.0


Dear hakkers,

we are building a routing system which integrates a lot of different REST services.
We were using Akka 2.5; we have now moved to Akka 2.6.9 and Akka HTTP 10.2,
and suddenly we randomly receive Request Timeout (status 408) exceptions, which we hadn’t had before. Now I am trying to find out how to prevent or at least understand these exceptions.

Thank you in advance

1 post - 1 participant

Read full topic


Not receiving terminated event from remote Actor


This is on Akka 2.6.8:

Our clustered Akka deployment has become quite unreliable, and it seems that we are not receiving all Terminated events from remote actors. Today I found a pretty clear case in our log files.

  1. Node A (100.64.4.38:2551) gets removed from the cluster
  2. Node B gets notified about this:
    INFO akka.remote.artery.Association - Association to [akka://ClusterSystem@100.64.4.38:2551] having UID [2426136702514312762] has been stopped. All messages to this UID will be delivered to dead letters. Reason: ActorSystem terminated
  3. Node B starts watching an Actor on node A, but never receives a termination event. Therefore it assumes the actor is still there. Since I retry sending the message until I either get an acknowledge or a termination event, the functionality is broken.

Is there a bug in Akka? Or is my understanding of Akka incorrect that I would ALWAYS get a termination event, even if I start watching after the actor has died or after the node the actor was running on was removed from the cluster?

4 posts - 2 participants

Read full topic

Akka cluster sharding base on traffic of shards


The Akka Cluster Sharding coordinator uses LeastShardAllocationStrategy to decide where shards are allocated. The problem is that shards are not all equal: some shards will have more traffic than others, so if Akka distributes them equally across the nodes, some nodes will be under more load.

I found that we could implement ShardAllocationStrategy manually, but it only keeps track of the shardRegion ActorRefs and shardIds.

My question is: how can we bring more meaningful state (e.g. a weight for each shardId) into the strategy, so it can decide based on the weight of the shards?
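For illustration, a minimal sketch against the classic ShardAllocationStrategy interface, where weightOf is a hypothetical lookup you would have to feed with your own traffic metrics (the coordinator itself does not provide any):

```scala
import scala.collection.immutable
import scala.concurrent.Future
import akka.actor.ActorRef
import akka.cluster.sharding.ShardCoordinator.ShardAllocationStrategy
import akka.cluster.sharding.ShardRegion.ShardId

class WeightedAllocationStrategy(weightOf: ShardId => Double) extends ShardAllocationStrategy {

  override def allocateShard(
      requester: ActorRef,
      shardId: ShardId,
      currentShardAllocations: Map[ActorRef, immutable.IndexedSeq[ShardId]]): Future[ActorRef] = {
    // pick the region with the lowest total weight, not the fewest shards
    val (leastLoaded, _) = currentShardAllocations
      .map { case (region, shards) => region -> shards.map(weightOf).sum }
      .minBy(_._2)
    Future.successful(leastLoaded)
  }

  override def rebalance(
      currentShardAllocations: Map[ActorRef, immutable.IndexedSeq[ShardId]],
      rebalanceInProgress: Set[ShardId]): Future[Set[ShardId]] =
    Future.successful(Set.empty) // no proactive rebalancing in this sketch
}
```

How the weights get collected and distributed to the coordinator node is the hard part and is left out here.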

1 post - 1 participant

Read full topic

Websocket server doesn't work anymore in new (10.1.2 -> 10.2.0) version and can't downgrade


Hello - I tried today to add new features to a WebSocket messaging server I worked on a few weeks ago. At the time I was using Akka HTTP 10.1.2. The code to bind to an HTTP connection was

val binding = Http().bindAndHandle(route, "localhost", 8080)

taking in an implicit untyped ActorSystem

Since the change in versions, it’s now become

val binding = Http(system).newServerAt("localhost", 8080).bind(route)

Where system is a reference to a typed ActorSystem (I have two, I need one to spawn unmonitored actors).

This is all fine and good, but when I run it with the same route, my websocket server doesn’t connect. When I use websocat to smoke test, I get

[INFO  websocat::ws_client_peer] get_ws_client_peer
websocat: WebSocketError: I/O failure
websocat: error running

I did some debugging, tried to make a simpler Websocket route, and it seems it doesn’t even invoke my websocket route. So I said “whatever, I will finish my feature, and file a ticket to upgrade Akka HTTP versions later”. Turns out I can’t do that, because https://repo1.maven.org/maven2/com/typesafe/akka/akka-http-core_2.13/ doesn’t contain the Akka HTTP version I used to build this a few weeks ago.

This really sucks, because now instead of developing I am stuck trying to figure out why something that worked fine a few weeks ago doesn’t work in a new version which is not even a major version, and I don’t even have the option to just not upgrade. The learning curve on Scala and Akka is steep enough without this.

2 posts - 1 participant

Read full topic

LocalActorRef memory leak and Full GC Allocation Failure causing gc to take too long


Hi

We use akka cluster with persistence version 2.6.6_2.13 and our application runs on raspberry pi 3’s (compute module).

With -Xmx set to 128m for the JVM, it takes about 26 hours until the heap has almost no space left. There is no OutOfMemoryError, but the max GC pause grows to 2 s 339 ms and a Full GC happens every minute (analyzed using GCeasy).
This is not acceptable because we have set up 1-second timeouts in actors, which means they always time out when the application is paused for more than 1 second.
Goal: the max GC pause should not exceed 700 ms.

Increasing the heap size from 128m to 256m makes it even worse, as expected (max GC pause then is 16 sec 48 ms !).

Heap-dump analysis with Eclipse’s Memory Analyzer Tool (MAT) points to a memory problem with an instance of the class akka.actor.LocalActorRef.
One of our actors, called “propertyHostChannel”, holds an ActorRef instance (a LocalActorRef) that occupies 42.22% of the heap, and this gets worse the longer the application runs.

Additionally, we can see per-minute bursts of ~400 (!) debug logs like:

[DEBUG] [a.a.L.Deserialization] [akka.actor.LocalActorRefProvider.Deserialization] [SELServer-akka.actor.default-dispatcher-28] - Resolve (deserialization) of path [user/SELServer/PropertyHostChannelRouter/VirtualPropertyHost/$Q7#65017950] doesn't match an active actor. It has probably been stopped, using deadLetters.

Possibly relevant code excerpt from the ChannelRouterActor (the actor behind the “propertyHostChannel” instance):

private PSet<ActorSelection> clusterRoutersOfSameType = HashTreePSet.empty();

@Override
public Receive createReceive() {
    return receiveBuilder()
            .match(AddIdWithPropsToRegistry.class, this::onAddIdWithPropsToRegistry)
            .match(RemoveIdWithPropsFromRegistry.class, this::onRemoveIdWithPropsFromRegistry)
            .match(SendTo.class, this::onSendTo)
            .match(SendToCluster.class, this::onSendToCluster)
            // cluster events
            .match(ClusterEvent.CurrentClusterState.class, this::onCurrentClusterState)
            .match(ClusterEvent.MemberUp.class, mUp ->
                    addToClusterRouters(mUp.member()))
            .match(ClusterEvent.ReachableMember.class, reachableMember ->
                    addToClusterRouters(reachableMember.member()))
            .match(ClusterEvent.UnreachableMember.class, unreachableMember ->
                    removeFromClusterRouters(unreachableMember.member()))
            .match(ClusterEvent.MemberLeft.class, memberLeft ->
                    removeFromClusterRouters(memberLeft.member()))
            .match(ClusterEvent.MemberDowned.class, memberDowned ->
                    removeFromClusterRouters(memberDowned.member()))
            .build();
}

private void addToClusterRouters(Member member) {
    final ActorSelection cousinRouter = getCousinRouter(member);
    if (cousinRouter.anchor().path().address().host().isDefined()) {
        clusterRoutersOfSameType = clusterRoutersOfSameType.plus(cousinRouter);
        log.debug("Added cousinRouter: {} to clusterRoutersOfSameType: {}", cousinRouter, clusterRoutersOfSameType);
    }
}

private void removeFromClusterRouters(Member member) {
    final ActorSelection cousinRouter = getCousinRouter(member);
    clusterRoutersOfSameType = clusterRoutersOfSameType.minus(cousinRouter);
    log.debug("Removed cousinRouter: {} from clusterRoutersOfSameType: {}", cousinRouter, clusterRoutersOfSameType);
}

private ActorSelection getCousinRouter(Member member) {
    final String name = getContext().getSelf().path().name();

    // each node has a ChannelRouterActor at "/user/SELServer/" + name, so select that path
    return getContext().actorSelection(member.address() + "/user/SELServer/" + name);
}

Questions

  1. Why do we have bursts of the mentioned DEBUG logs?
  2. Why is there one instance of LocalActorRef which occupies so much heap?
  3. How can we get rid of the DEBUG log bursts, and how can we fix the memory issue so that GC does not take more than 700 ms?

Please let us know if you need more information.

Thanks

4 posts - 2 participants

Read full topic

`forward` pattern in Typed


Hi,

In classic, you could have actor A ask actor B, B could forward that request to C, and C would respond to A (on B's behalf).

A would ask with a timeout and handle the reply, but B would simply forward the request without any timeout management of its own.

In typed, I understand that there would need to be an additional step of translating C's protocol to B's before replying to A.

How can I “forward” a request to another actor in Akka Typed without having a redundant ask/timeout in the communication between B and C in this example? Is this possible?
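The key idea is that the reply-to address travels inside the message, so the middle actor can pass it on unchanged. A minimal plain-Java sketch of that idea (no Akka involved; `Request`, `actorB`, and `actorC` are hypothetical names):

```java
import java.util.function.Consumer;

// Plain-Java sketch (no Akka) of the idea behind forwarding: the reply-to
// address travels inside the message, so B can hand A's replyTo straight to C
// without its own ask or timeout; only A ever waits for a reply.
public class ForwardSketch {
    record Request(String payload, Consumer<String> replyTo) {}

    // B "forwards": it passes the request on, keeping A's replyTo unchanged
    static void actorB(Request req) {
        actorC(new Request(req.payload(), req.replyTo()));
    }

    // C replies directly to A via the carried replyTo
    static void actorC(Request req) {
        req.replyTo().accept("handled:" + req.payload());
    }

    public static void main(String[] args) {
        actorB(new Request("ping", System.out::println)); // prints "handled:ping"
    }
}
```

In Typed, the same shape shows up as an `ActorRef<Reply>` field in the message protocol: if B copies A's reply-to field into the message it sends to C, no extra ask, timeout, or message adapter is needed between B and C, provided C can produce A's reply type.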

3 posts - 2 participants

Read full topic

Java Generic Command for EventSource Actor : Jackson serialization


Hi, I am using Jackson serialization for all actor message communication. The configuration is working fine, but the message is deserialized into a Scala HashMap instead of the right type:

Below is my command class hierarchy:

public interface Command extends JacksonSerializable {
}
@Getter
public class ActorCommand<T> implements Command {
    final ActorRef<?> replyTo;
    final T t;

    public ActorCommand(T t, ActorRef<?> replyTo){
        this.t = t;
        this.replyTo = replyTo;
    }
}
public class InitActor<InitMessage> extends ActorCommand<InitMessage> {
    @JsonCreator
    public InitActor(InitMessage t, ActorRef<?> replyTo) {
        super(t, replyTo);
    }
}

InitMessage.java contains data that has to be read and acted upon.
I end up with the error below when this message is deserialized in another actor.
In my actor:

private Effect<Event, State> initializeActor(State state, InitActor cmd) {
    InitActor<InitMessage> command = cmd;
    InitMessage mesg = command.getT();
    ......
    ...

Below is the error I see:

java.lang.ClassCastException: class scala.collection.immutable.HashMap cannot be cast to class com.model.InitMessage
(scala.collection.immutable.HashMap and com.model.InitMessage are in unnamed module of loader 'app')

Essentially, “T” in my ActorCommand is converted to HashMap.
Appreciate any help here.
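For context on why T collapses to a map: Java erases type arguments at runtime, so a deserializer that has no concrete type information for the `t` field can only bind the payload generically; recovering the real type typically requires explicit type information (for example Jackson's `@JsonTypeInfo`, or concrete non-generic command classes). A minimal sketch of the erasure itself, with no Jackson involved (`Holder` is a hypothetical stand-in for `ActorCommand`):

```java
import java.util.Map;

// Sketch of type erasure: at runtime the type argument is gone and the
// generic field is just Object, which is why a deserializer without explicit
// type info cannot reconstruct T and falls back to a generic map.
public class ErasureSketch {
    static class Holder<T> {
        final T value;
        Holder(T value) { this.value = value; }
    }

    public static void main(String[] args) {
        Holder<String> a = new Holder<>("payload");
        Holder<Map<String, Object>> b = new Holder<>(Map.of("k", "v"));
        // Both holders share a single runtime class...
        System.out.println(a.getClass() == b.getClass()); // prints "true"
        // ...and the declared field type of "value" is plain Object.
        System.out.println(Holder.class.getDeclaredFields()[0].getType()); // prints "class java.lang.Object"
    }
}
```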

1 post - 1 participant

Read full topic

How to find cluster sharded entity Actors by entity id


Hi,

We have a requirement where a Kafka message from an external system is supposed to update one unique sharded entity. The message carries the unique entity id key.
What is the recommended pattern to find a sharded entity actor by entity id? I understand the Receptionist can always give a list of ActorRefs, but how do I find a particular entity's ActorRef? Request/response over the whole list would be too heavy.

2 posts - 2 participants

Read full topic


RoleLeaderChanged appoints 2 leaders during start-up of a node


Akka 2.6.8
Java 8

I have found another source of unstable behavior in our production cluster.

Following situation:

  1. I have 2 running nodes A and B. Node A has been appointed as the leader by evaluating the RoleLeaderChanged event.
  2. Node B (the one that is not the leader) gets restarted.
  3. During start-up, node B gets appointed as leader via a RoleLeaderChanged event. Node A remains leader during this time and does not get any notification. Some actors now cause damage because they are running on two nodes at once.
  4. After a short period of time, node B gets another RoleLeaderChanged event and recognizes node A as the leader now. Now everything is fine, but the leader on node A cannot recover the damage that node B has created, because it does not even get to know that there was a second leader for some time.

Here are the relevant log lines. The node is leader for 5 seconds until it gets the Up status.

2020-09-22T22:08:34.077Z INFO  myown - Handle RoleLeaderChanged, selfAddress=akka://ClusterSystem@100.64.4.57:2551, leaderAddress=akka://ClusterSystem@100.64.4.57:2551, isLeader=true
2020-09-22T22:08:39.477Z INFO  akka.cluster.Cluster - Cluster Node [akka://ClusterSystem@100.64.4.57:2551] - Marking node as REACHABLE [Member(address = akka://ClusterSystem@100.64.0.45:2551, status = Up)].
2020-09-22T22:08:39.478Z INFO  akka.cluster.Cluster - Cluster Node [akka://ClusterSystem@100.64.4.57:2551] - is no longer leader
2020-09-22T22:08:39.479Z INFO  myown - Handle RoleLeaderChanged, selfAddress=akka://ClusterSystem@100.64.4.57:2551, leaderAddress=akka://ClusterSystem@100.64.0.45:2551, isLeader=false

Is this expected behavior? I can certainly evaluate Member.Up events as well, but that makes it much harder to rely on RoleLeaderChanged events. I would have expected that Akka would not send any RoleLeaderChanged events until a decision can be made, or, if it does, to send them without a leader being set.

2 posts - 2 participants

Read full topic

Akka-Cluster: Decreasing system performance having many active actors


Hello everyone,

we are running an actor system (cluster sharding) with about 5 million actors per node. With a constant inbound message rate, we see the system slowing down as more and more actors are held in memory.

few actors: 1 ms
5M actors: > 200 ms

Does the number of active but unused actors have a significant negative impact on latency?

System:

  • 5 node cluster
  • akka-cluster-sharding
  • akka-persistence-cassandra

For example, does the dispatcher need to check 5M mailboxes when only 1K mailboxes have messages?

Thanks in advance and best regards
Thomas S.

3 posts - 2 participants

Read full topic

Check if an actor (Abstract / sharded / event source) with an ID is already created


Hi,

Is there a way to check whether an actor has already been created, given a unique ID?

We have a requirement to apply different behaviors to new versus existing actors.

Thanks.

1 post - 1 participant

Read full topic

How to exit stream after n elements received?


Hello, I’m brand new to Akka and I’m just trying to get the hang of it.

As an experiment, I want to read from a Kinesis stream and collect n messages and stop.

The only sink I found that would stop reading records was Sink.head(), but that only returns one record; I’d like to get more than that.

I can’t quite figure out how to stop reading from the stream after receiving n messages, though.

Here’s the code I have tried so far:

  @Test
  public void testReadingFromKinesisNRecords() throws ExecutionException, InterruptedException {
    final ActorSystem system = ActorSystem.create("foo");
    final Materializer materializer = ActorMaterializer.create(system);

    ProfileCredentialsProvider profileCredentialsProvider = ProfileCredentialsProvider.create();

    final KinesisAsyncClient kinesisClient = KinesisAsyncClient.builder()
        .credentialsProvider(profileCredentialsProvider)
        .region(Region.US_WEST_2)
            .httpClient(AkkaHttpClient.builder()
                .withActorSystem(system).build())
            .build();

    system.registerOnTermination(kinesisClient::close);

    String streamName = "akka-test-stream";
    String shardId = "shardId-000000000000";

    int numberOfRecordsToRead = 3;

    final ShardSettings settings = ShardSettings.create(streamName, shardId)
            .withRefreshInterval(Duration.ofSeconds(1))
            .withLimit(numberOfRecordsToRead) // return a maximum of n records (and quit?!)
            .withShardIterator(ShardIterators.latest());

    final Source<Record, NotUsed> sourceKinesisBasic = KinesisSource.basic(settings, kinesisClient);

    Flow<Record, String, NotUsed> flowMapRecordToString = Flow.of(Record.class).map(record -> extractDataFromRecord(record));
    Flow<String, String, NotUsed> flowPrinter = Flow.of(String.class).map(s -> debugPrint(s));
//    Flow<String, List<String>, NotUsed> flowGroupedWithinMinute =
//        Flow.of(String.class).groupedWithin(
//            numberOfRecordsToRead, // group size
//            Duration.ofSeconds(60) // group time
//        );

    Source<String, NotUsed> sourceStringsFromKinesisRecords = sourceKinesisBasic
        .via(flowMapRecordToString)
        .via(flowPrinter);
//        .via(flowGroupedWithinMinute); // nope

    // sink to list of strings
//    Sink<String, CompletionStage<List<String>>> sinkToList = Sink.seq();
    Sink<String, CompletionStage<List<String>>> sink10 = Sink.takeLast(10);
//    Sink<String, CompletionStage<String>> sinkHead = Sink.head(); // only gives you one message

    CompletionStage<List<String>> streamCompletion = sourceStringsFromKinesisRecords
        .runWith(sink10, materializer);
    CompletableFuture<List<String>> completableFuture = streamCompletion.toCompletableFuture();
    completableFuture.join(); // never stops running...
    List<String> result = completableFuture.get();
    int foo = 1;
  }

  private String extractDataFromRecord(Record record) {
    String encType = record.encryptionTypeAsString();
    Instant arrivalTimestamp = record.approximateArrivalTimestamp();
    String data = record.data().asString(StandardCharsets.UTF_8);
    return data;
  }

  private String debugPrint(String s) {
    System.out.println(s);
    return s;
  }
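For what it's worth, `Sink.takeLast(10)` only completes when the stream itself completes, which an unbounded Kinesis source never does; in Akka Streams the operator that completes after n elements is `take(n)` on the Source (e.g. applying it before `runWith`), which cancels upstream and lets the materialized CompletionStage finish. As a language-level analogy only (plain `java.util.stream`, no Akka or Kinesis):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch of the "complete after n elements" idea with a plain Java stream:
// limit(n) on an unbounded generator plays the role that take(n) plays on an
// unbounded Akka Streams Source.
public class TakeSketch {
    public static void main(String[] args) {
        List<Integer> firstThree = Stream.iterate(0, i -> i + 1) // unbounded source
                .limit(3)                                        // analogous to take(3)
                .collect(Collectors.toList());
        System.out.println(firstThree); // prints "[0, 1, 2]"
    }
}
```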

Thank you for any clues

1 post - 1 participant

Read full topic

Akka Cluster Connection Refused Between Machines


I am cross-posting this issue between here and StackOverflow (https://stackoverflow.com/questions/64108688/akka-cluster-connection-refused-between-machines) since it is specific to Akka.

I am attempting to make a project using Akka Clustering, and have been using the akka-cluster-sample-scala from Lightbend (https://github.com/akka/akka-samples/tree/2.6/akka-sample-cluster-scala) as a base. As it lacks much direct information on connecting across a network, I modified the application.conf to look more like this:

akka {
  actor {
    provider = cluster

    serialization-bindings {
      "sample.cluster.CborSerializable" = jackson-cbor
    }
  }
  remote {
    artery {
      canonical.hostname = "127.0.0.1"
      canonical.port = 0
    }
  }
  cluster {
    seed-nodes = [
      "akka://ClusterSystem@131.194.71.132:25251",
      "akka://ClusterSystem@131.194.71.132:25252",
      "akka://ClusterSystem@131.194.71.133:25251",
      "akka://ClusterSystem@131.194.71.133:25252"]
    downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
  }
}

When run across these two machines, Akka fails to connect over TCP between them, leading to warnings like the following:

[info] [2020-09-28 14:34:37,877] [WARN] [akka.stream.Materializer] [] [ClusterSystem-akka.actor.default-dispatcher-5] - [outbound connection to [akka://ClusterSystem@131.194.71.132:25251], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(131.194.71.132:25251,None,List(),Some(5000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused

Is there anything notably wrong that may be causing this, or something more specifically needing to be reconfigured in order to allow connection over TCP between these machines?
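One detail worth noting from the posted config: with Artery, `canonical.hostname` is the address a node advertises to its peers, so `127.0.0.1` tells every other machine to connect back to itself. A sketch of a per-machine override (addresses copied from the seed-node list; each machine would use its own externally reachable address, and the fixed port is an assumption matching the seed entries):

```hocon
akka.remote.artery {
  # advertise an address the *other* machines can reach, not loopback
  canonical.hostname = "131.194.71.132"  # this machine's externally reachable address
  canonical.port = 25251                 # fixed port matching the seed-node entries
}
```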

1 post - 1 participant

Read full topic
