Channel: Akka Libraries - Discussion Forum for Akka technologies

How to create a singleton parent actor?


@syadav wrote:

Hi, I am new to Akka and am developing a Spring Boot application integrated with Akka.

I have a root actor, which has a few child actors. The root actor receives all the messages and then forwards them to the child actors based on their type. Now, I want to have only a single instance of the root actor in the system, so I specified that in the application.conf file. I also want actorOf() to return the same ActorRef of the root actor each time it is called, since the root actor will receive a lot of messages and I don’t want a new instance created each time (which in turn would create new child actor instances).
So, I created a wrapper over it.

RootActor :

@Component("rootActor")
@Scope("prototype")
public class RootActor extends UntypedActor{

   private static final String addOnsActorName = "addOnsActor";
   private static final String assetsActorName = "assetsActor";

   static volatile ActorRef addOnsActorRef = null;
   static volatile ActorRef assetsActorRef = null;

   @Inject
   private ActorSystem actorSystem;

   @PostConstruct
   public void init() {
      addOnsActorRef = createChild(addOnsActorName);
      assetsActorRef = createChild(assetsActorName);
   }

   public void onReceive(Object arg0) throws Exception {
      if (!(arg0 instanceof Notification)) {
        throw new InvalidMessageException("Invalid message!!");
      }

      Notification message = (Notification) arg0;
      Type type = message.getResourceType();
      // Compare with equals()/enum identity rather than == against a String
      // literal, which never matches a Type (assuming Type is an enum with
      // ADDONS and ASSETS constants).
      if (Type.ADDONS.equals(type)) {
         addOnsActorRef.tell(message, self());
      } else if (Type.ASSETS.equals(type)) {
         assetsActorRef.tell(message, self());
      }
   }

   public ActorRef createChild(String childName) {
      return getContext().actorOf(SpringExtProvider.get(actorSystem).props(childName).withRouter(new FromConfig()), childName);
   }
}

RootActorWrapper :

@Component
public class RootActorWrapper {

    @Inject
    ActorSystem actorSystem;

    protected ActorRef rootActor = null;

    @PostConstruct
    public void init() throws ActorException {
        try {
            rootActor = actorSystem.actorOf(SpringExtProvider.get(actorSystem)
                    .props("rootActor")
                    .withRouter(new FromConfig()),
                    "rootActor");
       } catch (Throwable e) {
            //handle error
        }
    }

    public void process(Notification notification) {
        rootActor.tell(notification, null);
    }
}

I want to know if this is all right, or is there a better way to do this? The reason I want to keep the root actor as a singleton is that it creates child actors (with 20 instances each); hence I don’t want a new instance of the root actor, and thus new child instances, created by each actorOf() call.
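
For comparison, a sketch of letting Spring own the singleton ActorRef directly, so no wrapper class is needed (same SpringExtProvider as above; the configuration class name is illustrative):

@Configuration
public class ActorConfig {

    // Spring beans are singletons by default, so the root ActorRef is
    // created exactly once and can be injected wherever it is needed.
    @Bean
    public ActorRef rootActor(ActorSystem actorSystem) {
        return actorSystem.actorOf(SpringExtProvider.get(actorSystem)
                .props("rootActor")
                .withRouter(new FromConfig()),
                "rootActor");
    }
}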

So, is doing this okay? Or is it an anti-pattern?

Thanks in advance!

Posts: 5

Participants: 2



Akka.cluster.metrics.native-library-extract-folder refers to $user.dir


@surendarchandra wrote:

For some reason, I don’t have user.dir and so I get:

com.typesafe.config.ConfigException$UnresolvedSubstitution: reference.conf @ jar:file:/opt/rubrik/src/java/sd/target/sd-0.1.jar!/reference.conf: 3840: Could not resolve substitution to a value: ${user.dir}
at com.typesafe.config.impl.ConfigReference.resolveSubstitutions(ConfigReference.java:111) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.ConfigConcatenation.resolveSubstitutions(ConfigConcatenation.java:205) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398) ~[sd-0.1.jar:?]
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179) ~[sd-0.1.jar:?]

I tried everything to feed it this directory and it all fails:

System.setProperty("user.dir", "/tmp")

private val config =
  customConf
    .withFallback(ConfigFactory.systemProperties())
    .withFallback(ConfigFactory.defaultOverrides())
    .withFallback(ConfigFactory.defaultReference())

Other than manually editing reference.conf from the jar, is there a way around this? I am still not sure why I don’t have user.dir; I can see it from the Scala REPL (repl.sh).
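
For what it’s worth, Typesafe Config caches the default reference configuration, so a property set after the first load may not be picked up. A minimal sketch of working around that, assuming the property just needs to exist before resolution (ConfigFactory.invalidateCaches() is part of the Typesafe Config API; the /tmp value is illustrative):

// Set the property before any config is resolved, then clear the caches so
// defaultReference() is re-read with the updated system properties.
System.setProperty("user.dir", "/tmp")
ConfigFactory.invalidateCaches()
val config = customConf.withFallback(ConfigFactory.defaultReference())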

Thanks much

Posts: 7

Participants: 2


Akka HTTP 10.1.10 released


@raboof wrote:

Dear hakkers,

We are happy to announce the 10.1.10 release of Akka HTTP. This release is the tenth update in the 10.1.x series of Akka HTTP.

Migration notes

RFC 7231 dictates that an HTTP response with status code 205 (‘Reset Content’) is not allowed to have an entity body. Since #2686 we enforce this restriction, so if you (incorrectly) produced such responses you will have to either remove the entity body or select a different status code.
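
For illustration, a minimal sketch of a compliant response (Scala API; assumes the usual akka.http.scaladsl.model imports):

// A 205 must now be built without an entity body.
HttpResponse(status = StatusCodes.ResetContent)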

Changes since 10.1.9

For a full overview you can also see the 10.1.10 milestone:

akka-http-core
  • Fix 205 HTTP status not to contain any HTTP entity #2686
  • Support multiple subprotocols in WebSocket handshake #2606
  • Add endsWith predicate to Uri.Path #2480
  • Handle unrecognized status codes according to spec #2503
  • Better error handling on server when response entity stream fails #2627
  • Force connection closure when pool is stopped #2631
  • Enable log-unencrypted-network-bytes also for websocket client traffic #2647
  • Add modeled header for Content-Location #2540
  • Streamed response processing performance improvements #2645
  • Make custom MediaType and MediaRange.matches case-insensitive #2126
akka-http
  • More precise IllegalArgumentException catch for case class extraction #2593
  • Add logging unsupported content type #2512
  • Widen JavaUUID regexp’s #2624
akka-http2-support
  • Support HTTP2 in cleartext (h2c) via Upgrade from HTTP1 #2464
  • Backpressure incoming frames when too many outgoing control frames are buffered #2706
  • Fix receiving HEADERS with more than one CONTINUATION frame #2701

Credits

The complete list of closed issues can be found on the 10.1.10 milestone on GitHub.

For this release we had the help of 23 contributors – thank you all very much!

commits  added  removed
     39   1319      651 Johannes Rudolph
     23    755      287 Arnout Engelen
      5      9       10 Scala Steward
      3     33       27 ta.tanaka
      2     46       38 tanaka takaya
      2     11       34 Tim Moore
      2     16        7 Philippus
      1    232        9 Josep Prat
      1     65       10 Alex Afanasev
      1     53        0 Jan Ypma
      1     48        2 k.bigwheel
      1     41        8 Mathias
      1     46        3 Roman Brodetski
      1     32        8 Andrejs Pavlovics
      1      8        8 Kamal Raj Sekar
      1     13        1 Sunghyun Hwang
      1      6        5 László van den Hoek
      1      5        1 Patrik Nordwall
      1      2        2 Martynas Mickevičius
      1      2        1 Enno Runne
      1      1        1 Vitalii Lysov
      1      1        1 Akhtiam Sakaev
      1      1        1 PiotrJander

Thanks to Lightbend for their continued sponsorship of the Akka core team’s efforts. Lightbend offers commercial support for Akka.

Happy hakking!

– The Akka Team

Posts: 1

Participants: 1



Failure-detector.acceptable-heartbeat-pause does not seem to work


@weicheng113 wrote:

Sorry, ignore this for now. Maybe I made a mistake.

Hi, I am running a cluster in an AWS EC2 environment. I was trying to increase the acceptable-heartbeat-pause of the cluster failure detector with the following:

akka.cluster {
  ...
  failure-detector {
    heartbeat-interval = 2s
    threshold = 12.0
    acceptable-heartbeat-pause = 8s
  }
}

But I am still receiving the heartbeat warning below:

heartbeat interval is growing too large for address akka.tcp://cluster_name1@10.80.30.60:8300: 3699 millis

It seems the setting was not applied. Looking at the source code, I expected to get the warning only when the interval is bigger than 5332 millis (8000 / 3 * 2, with integer division):

akka.remote.PhiAccrualFailureDetector {
  ...
  if (interval >= (acceptableHeartbeatPauseMillis / 3 * 2) && eventStream.isDefined)
    eventStream.get.publish(
      Warning(
        this.toString,
        getClass,
        s"heartbeat interval is growing too large for address $address: $interval millis"))
  oldState.history :+ interval
  ...
}

Thanks,
Cheng

Posts: 3

Participants: 2


Unsubscribe from receptionist events


@schernichkin wrote:

I want to listen for receptionist events for a limited amount of time and then unsubscribe. Will stopping the listening actor (by using Behaviors.stopped) be enough? How does the receptionist maintain its list of listening actors, and what will it do when a listening actor is stopped?
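
For concreteness, a minimal sketch of the pattern in question (Akka Typed Scala API; PingKey and the fixed number of updates are illustrative):

import akka.actor.typed.Behavior
import akka.actor.typed.receptionist.{Receptionist, ServiceKey}
import akka.actor.typed.scaladsl.Behaviors

val PingKey: ServiceKey[String] = ServiceKey[String]("ping") // hypothetical key

// Subscribe once, handle a limited number of listing updates, then stop.
def listener(): Behavior[Receptionist.Listing] =
  Behaviors.setup { ctx =>
    ctx.system.receptionist ! Receptionist.Subscribe(PingKey, ctx.self)

    def handle(remaining: Int): Behavior[Receptionist.Listing] =
      Behaviors.receiveMessagePartial { case PingKey.Listing(services) =>
        ctx.log.info("{} registered services", services.size)
        if (remaining <= 1) Behaviors.stopped // the question: is this enough?
        else handle(remaining - 1)
      }

    handle(remaining = 5)
  }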

Posts: 2

Participants: 2


Flat json with Akka Stream


@jonatasfreitasv wrote:

Hello guys,

I’m using Play JSON to work with JSON data.

With Akka Streams I have this input:

{
          "sku": 0,
          "buffer": 0,
          "consistency": 0,
          "inner": 0,
          "stores": [
            {
                "store": 1,
                "eoh": 0,
                "store_last_received": "yyyy-MM-dd",
                "store_sales_qtd_last_six_weeks": 0,
                "store_size": "G",
                "store_lead_time": 0,
                "vm": 0
            },
            {
                "store": 2,
                "eoh": 0,
                "store_last_received": "yyyy-MM-dd",
                "store_sales_qtd_last_six_weeks": 0,
                "store_size": "G",
                "store_lead_time": 0,
                "vm": 0
            }
          ]
 }

I need to produce one event per element of the “stores” array. With this example, one event goes in and two events come out, each with this schema:

//// EVENT 1
{
    "sku": 0,
    "buffer": 0,
    "consistency": 0,
    "inner": 0,
    "store": 2,
    "eoh": 0,
    "store_last_received": "yyyy-MM-dd",
    "store_sales_qtd_last_six_weeks": 0,
    "store_size": "G",
    "store_lead_time": 0,
    "vm": 0,

    "forecast": 0,
    "vm_fulfilled": false,
    "consistency_fulfilled": false,
    "final_buffer": 0
}

//// EVENT 2
{
    "sku": 0,
    "buffer": 0,
    "consistency": 0,
    "inner": 0,
    "store": 1,
    "eoh": 0,
    "store_last_received": "yyyy-MM-dd",
    "store_sales_qtd_last_six_weeks": 0,
    "store_size": "G",
    "store_lead_time": 0,
    "vm": 0,

    "forecast": 0,
    "vm_fulfilled": false,
    "consistency_fulfilled": false,
    "final_buffer": 0
}
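
For reference, a minimal sketch of the flattening step with Play JSON and mapConcat (assuming the input is already parsed to a JsValue; the extra output fields are filled with the defaults from the example):

import akka.NotUsed
import akka.stream.scaladsl.Flow
import play.api.libs.json._

// Emit one JsObject per element of "stores", merging the top-level fields
// with each store's fields and appending the extra output fields.
val flatten: Flow[JsValue, JsObject, NotUsed] =
  Flow[JsValue].mapConcat { json =>
    val base = json.as[JsObject] - "stores"
    (json \ "stores").as[List[JsObject]].map { store =>
      base ++ store ++ Json.obj(
        "forecast" -> 0,
        "vm_fulfilled" -> false,
        "consistency_fulfilled" -> false,
        "final_buffer" -> 0)
    }
  }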

Posts: 2

Participants: 1


Allocation hotspots of akka-client


@flschulz wrote:

Hello,
in one of my applications I’m using the akka-http client, and a few days ago I did some load testing and profiling to see which components are doing most of the allocations and slowing down the application. I noticed that around 40% of the allocations are done by the akka-http components. To isolate the problem, I created a simple script which calls an HTTP mock server via Http().singleRequest 100,000 times.

The profiling showed that for these 100,000 calls around 650MB of data was allocated in the NewHostConnectionPool in the function runOneTransition (package akka.http.impl.engine.client.pool.NewHostConnectionPool). The reason for the allocations is the debug function calls in this function (e.g. see https://github.com/akka/akka-http/blob/master/akka-http-core/src/main/scala/akka/http/impl/engine/client/pool/NewHostConnectionPool.scala#L252). These debug calls trigger many char/String allocations even when debug-level logging is not in use, because the log level check is only executed inside the debug function, while the String allocation is triggered before the call.
To avoid these allocations, the log level check could either be inlined or a macro-based solution could be used. I checked this locally by removing the debug calls; it led to around 50% fewer and shorter GC pauses and generally improved the allocation profile of the application.
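
The inlining idea as a minimal sketch (the message text is illustrative, not the actual akka-http log line):

// Guard the interpolation at the call site so the String is only built when
// debug logging is actually enabled.
if (log.isDebugEnabled)
  log.debug(s"state transition: $previousState -> $nextState")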

After these allocations were removed, I saw that there was a second function call allocating around 400MB of objects for these 100,000 requests. The reason is this function call: https://github.com/akka/akka-http/blob/master/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolInterface.scala#L168. This calls the UriParser.parseHost() function, which in the end allocates akka.parboiled2.ValueStack objects. Based on this, it looks like parsing the host is done very frequently, even though I was only calling one host in my test script. I’m not deeply familiar with the internals of akka-http, but from what I understand it uses host connection pools. Wouldn’t it be possible to parse the host only once per host connection pool and reuse that, instead of doing it for every request?

I searched the open issues and also the forum but didn’t find anything, so my general question is whether these two hotspots are already known. Improving them would reduce the overall allocations by more than 50% and improve the overall performance of the client implementation. I wanted to share these findings here.

I have tested this with Scala 2.12.8, akka-http 10.1.10 and akka-streams 2.5.25.

Best regards and thanks in advance!

P.S. I wanted to share more references to the actual code, but as a new user I’m only allowed to share two URLs. :wink:

Posts: 4

Participants: 3



Akka 2.6.0-M8 released


@patriknw wrote:

Dear hakkers,

The eighth development milestone for Akka 2.6 is out.

It would be excellent if you could try the milestones out and give us feedback. We intend to publish the first release candidate around 3 weeks from now. Akka 2.6 is binary backwards compatible with 2.5, with the ordinary exceptions listed in the documentation. Some configuration changes may be needed; please read the migration guide as a first step.

Some notable changes in 2.6.0-M8:

  • Remove ActorPublisher and ActorSubscriber, which have been deprecated since 2.5.0 #26187
  • Add positive and negative TTL config for AsyncDnsResolver #27578 thanks to @burbaki
  • Fix JsonFraming to accept multiple top-level arrays #26099
  • New messages for DataDeleted in Distributed Data #27371
  • ActorContext as constructor parameter in AbstractBehavior #27689 (see the sketch after this list)
  • Many documentation and migration guide improvements, such as the documentation of SLF4J logging in Akka Typed
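
A minimal sketch of that constructor-parameter style in the Java API (the Greeter class and its String message type are illustrative, not from the release notes):

import akka.actor.typed.Behavior;
import akka.actor.typed.javadsl.AbstractBehavior;
import akka.actor.typed.javadsl.ActorContext;
import akka.actor.typed.javadsl.Behaviors;
import akka.actor.typed.javadsl.Receive;

public class Greeter extends AbstractBehavior<String> {

  public static Behavior<String> create() {
    return Behaviors.setup(Greeter::new);
  }

  private Greeter(ActorContext<String> context) {
    super(context); // the context is now supplied via the constructor
  }

  @Override
  public Receive<String> createReceive() {
    return newReceiveBuilder()
        .onMessage(String.class, msg -> {
          getContext().getLog().info("Hello {}", msg);
          return this;
        })
        .build();
  }
}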

A total of 48 issues were closed since 2.6.0-M7. The complete list can be found on the 2.6.0-M8 milestone on github.

Credits

For this release we had the help of 16 committers – thank you all very much!

commits  added  removed
     24   6108     4503 Patrik Nordwall
     17   1688     1679 Johan Andrén
     13   2755     2007 Helena Edelson
      4      4        4 Scala Steward
      2    284      459 Renato Cavalcanti
      2    129       13 Arnout Engelen
      1    617      194 Nicolas Vollmar
      1     76        2 Christopher Batey
      1     37       38 tanaka takaya
      1     27        4 Roman Filonenko
      1      6        6 Taeguk Kwon
      1      5        5 Jakob Merljak
      1      3        3 Mahmut Bulut
      1      2        2 dsebban
      1      1        2 Ignasi Marimon-Clos
      1      1        1 Ethan Atkins

Thanks to Lightbend for their continued sponsorship of the Akka core team’s efforts. Lightbend offers commercial support for Akka.

Happy hakking!

– The Akka Team

Posts: 1

Participants: 1


HTTP Management and health checks


@lay wrote:

I am using the HTTP Management in an existing HTTP server adding the routes in the following way:

restServer.addRoute(ClusterHttpManagementRoutes.all(Cluster.get(getContext().getSystem())));

For the cluster management routes (/cluster/members, …) this works perfectly, but the health check routes are not added. Is this the expected behaviour, or am I doing something wrong?

I was able to use the health check routes when I started the Akka Management on a separate HTTP server:

AkkaManagement.get(getContext().getSystem()).start();

But I would really like to provide these checks in my existing HTTP server. Did anybody manage to add the health check routes to an existing HTTP server and can point me in the right direction?
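
A rough sketch of what I am after, assuming the combined-routes accessor that recent akka-management versions expose on AkkaManagement for embedding (the exact Java spelling of the accessor may differ by version):

// Embed all management routes, health checks included, in the existing server.
restServer.addRoute(AkkaManagement.get(getContext().getSystem()).routes());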

Thanks,
Lay

Posts: 3

Participants: 2


Error kernel pattern and scheduler


@freedev wrote:

Hi All,

I have implemented the error kernel pattern in a sample I was trying to create.

The idea was to create a conversation between 3 actors, where one does not respond immediately: you need to send multiple messages before receiving an answer. The implementation should be resilient even if the actors run in a distributed environment.

So I’ve created a child actor which retries using a scheduler and, when the answer finally arrives, the child notifies its parent and stops.

Even though this solution works well, I have a few doubts.

First, in the child actor I’ve implemented a scheduler that executes a function every 50 milliseconds, and I have some doubts about the execution context. I mean, can the code inside the method sendMessage safely modify its own actor?

When an actor schedules an execution, what happens? Does the scheduler wait until the actor ends its own work?

  import scala.concurrent.ExecutionContext.Implicits.global
  import scala.concurrent.duration._
  import scala.language.postfixOps

  var cancellableSchedule: Option[Cancellable] = None

  var counter = 0
  var maxCounter = 10

  def receive = LoggingReceive {
    case r: MessageB2C_Ack =>
      log.info("ActorChildB - Received MessageB2C_Ack from " + sender())
      parentActor ! r
      context.stop(self)
    case r: SendMessage =>
      log.info("ActorChildB - Received SendMessage from " + sender())
      sendScheduledMessage()
  }

  private def sendScheduledMessage(): Unit = {
    // Note: the block passed to schedule() runs on the dispatcher's threads,
    // outside the actor's own message processing.
    cancellableSchedule = Option(context.system.scheduler.schedule(0 milliseconds, 50 milliseconds) {
      sendMessage()
    })
  }

  private def sendMessage(): Unit = {
    log.info("ActorChildB - sendMessage " + msg.getClass.getName + " to " + dest)
    if (counter < maxCounter) {
      dest ! msg
      counter = counter + 1
    } else {
      throw new MyRetryTimeoutException("Fine")
    }
  }

  override def postStop(): Unit = {
    cancellableSchedule.foreach(_.cancel())
    super.postStop()
  }
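
For comparison, a minimal sketch of the variant that never touches actor state from the scheduled block: schedule a message to self, so sendMessage() only runs inside receive (SendTick is a hypothetical message object; uses the duration imports from above):

  private case object SendTick

  private def sendScheduledMessage(): Unit = {
    import context.dispatcher
    // The scheduler only enqueues SendTick into the mailbox; the mutation of
    // `counter` then happens during normal message processing.
    cancellableSchedule =
      Option(context.system.scheduler.schedule(0.milliseconds, 50.milliseconds, self, SendTick))
  }

  // and in receive:
  //   case SendTick => sendMessage()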

Posts: 6

Participants: 2


`majority-min-cap` default value

$
0
0

@asavelyev wrote:

Our team is currently wondering whether we dare to change the default setting

akka.cluster.sharding.distributed-data.majority-min-cap = 5

set in the reference config: https://github.com/akka/akka/blob/master/akka-cluster-sharding/src/main/resources/reference.conf#L157

This frequently makes Akka sharding get stuck during a rolling update of a cluster with fewer than 5 nodes.

I wonder what specific “bad” scenario this value is meant to prevent. Why doesn’t a simple majority, e.g. 3 out of 5, work for small clusters (so that majority-min-cap could be something like 2)?
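
For intuition, a rough sketch of the arithmetic, assuming the required majority is computed along the lines of min(n, max(n/2 + 1, minCap)) (a simplification for illustration, not the actual Akka implementation):

// With the default cap of 5, a 3-node cluster needs all 3 nodes to ack, so a
// single unreachable node during a rolling update blocks majority reads and
// writes; with a cap of 2 a plain majority of 2 out of 3 would suffice.
def majority(n: Int, minCap: Int): Int =
  math.min(n, math.max(n / 2 + 1, minCap))

majority(3, minCap = 5) // 3
majority(3, minCap = 2) // 2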

Posts: 1

Participants: 1


Amazon MSK (Kafka) and Alpakka-Kafka


@scotartt wrote:

Hi guys

Are there any traps, gotchas, or caveats I need to watch out for if I’m going to use the Alpakka Kafka connector with the Amazon MSK product (which is the AWS-managed Kafka installation)?

Does anyone have any experience with this combination?

Thanks
scot

Posts: 1

Participants: 1


Alpakka Kafka 1.1.0-RC2


@ennru wrote:

Dear hakkers,

We are happy to announce Alpakka Kafka 1.1.0-RC2.

We are bumping the minor version number because of changes to the internals of how offset commits are sent to the Kafka broker, which improve performance in high-throughput scenarios.

This RC2 adds a new source for advanced usage: committablePartitionedManualOffsetSource, which may be used when offsets are stored external to Kafka, but tools for consumer lag which rely on offsets being committed to Kafka are in use.

Please see the full release notes in the Alpakka Kafka documentation.

Alpakka Kafka 1.1.0 final will be released next week if no blocker issues are reported with this release candidate.

Happy hakking!

– The Alpakka Team

Posts: 1

Participants: 1


Mute log warning when Connection attempt failed


@MeniSamet wrote:

Hi,
We use Akka HTTP for the client and are getting the log warning

Connection attempt failed. Backing off new connection attempts for at least XXX milliseconds.

Since we have a monitoring system connected to our logging event bus, this appears as a warning, without the ability to mute it the way akka.http.parsing.illegal-header-warnings allows.

Is there an option to handle that issue?

Thanks,
Meni.

Posts: 1

Participants: 1



Stream framing without delimiter or length field


@ecartner wrote:

My apologies if this is a silly question. I’ve searched and searched without any luck; maybe my search-term skills are lacking.

I want to take a stream of bytes and convert it to a stream of frames. Unfortunately, there is no single “length field” in the frame. However, I can calculate the frame size from four fields inside the four-byte header at the start of every frame.

I tried using the variant of akka.stream.scaladsl.Framing.lengthField() that takes a computeFrameSize function as an argument. It seems that even though I’m providing a function to turn the 32 bits of the header into a frame size, lengthField() takes a look at the header (which will always start with 0xFFF), says “Hey! That’s a negative number. That can’t be a valid length field.” and fails. Because the 0xFFF sync word is 12 bits long and I need the very first bit after it, I can’t simply skip over the sync word. Thus I’m always going to get a value that lengthField() interprets as a negative number.

Was it intentional for lengthField() to validate the field as a length even when provided a function to turn it into a frame size? Is there another library element that could do the bytes-to-frames transformation I’m trying to do? Would the simplest solution be to make my own framing stage based on the code for lengthField()?
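
In case the last option is the answer, a hand-rolled sketch using statefulMapConcat instead of a full custom GraphStage, assuming the frame size (header included) is computable from the first four bytes via a caller-supplied computeFrameSize, and with no validation of the 0xFFF sync word:

import akka.NotUsed
import akka.stream.scaladsl.Flow
import akka.util.ByteString

def frames(computeFrameSize: ByteString => Int): Flow[ByteString, ByteString, NotUsed] =
  Flow[ByteString].statefulMapConcat { () =>
    var buffer = ByteString.empty
    bytes => {
      buffer ++= bytes
      val out = List.newBuilder[ByteString]
      var emitting = true
      while (emitting) {
        if (buffer.length >= 4) {
          val size = computeFrameSize(buffer.take(4))
          if (buffer.length >= size) {
            out += buffer.take(size) // emit a complete frame, header included
            buffer = buffer.drop(size)
          } else emitting = false    // wait for the rest of the frame
        } else emitting = false      // not even a full header buffered yet
      }
      out.result()
    }
  }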

Any help would be greatly appreciated.

Thanks,

-Eric

Posts: 4

Participants: 2


Kafka streams creating multiple producer threads


@kolapkardhaval wrote:

I am running an Alpakka Kafka producer in an Akka stream and multiple producer threads are being created:

“kafka-producer-network-thread | producer-20” #49 daemon prio=5 os_prio=31 tid=0x00007f80562c1800 nid=0x8903 runnable [0x000070000e3f9000]

I am using a singleton and not creating multiple Kafka producer instances, but I am still seeing 10k kafka-producer-network-thread threads created. This is causing servers to go to 99% CPU usage.

The code is:

CompletionStage<Done> done =
    Source.range(1, 100)
        .map(number -> number.toString())
        .map(value -> new ProducerRecord<String, String>(topic, value))
        .runWith(Producer.plainSink(producerSettings, kafkaProducer), materializer);

Do I need to call Producer.close()? If so, where should I do that in this stream?
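
If closing is needed, a sketch of one place to do it, assuming the externally created producer is owned by the application (the plainSink(producerSettings) overload without a producer parameter manages its own producer instead):

// Close the externally supplied producer once the stream has completed,
// successfully or with a failure.
done.whenComplete((result, failure) -> kafkaProducer.close());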

A few more things I tried:

  1. Is there a way not to provide the producer from outside, e.g. by using the flexi flow or akka.kafka.javadsl.Producer.plainSink(producerSettings)? I tried that too, and it still created the network threads.
  2. When providing the producer from outside, should we close it after sending every message (or every few messages) and create a new instance again? Is there any example documentation I can refer to for closing the producer?
  3. Is there a way to limit the threads?

Posts: 2

Participants: 2


Streaming text over http from a Source.tick


@dleacock wrote:

I’m creating small demos to develop my understanding of streaming objects over HTTP using Akka. My end goal is a route that streams a generated object (an image from a webcam, to be precise) over HTTP. My attempt at a smaller version of this is a route containing a Source.tick with a call to a method that returns a string.

My route:

 path("test", () ->
                        get(() -> extractRequest(request -> complete(testHandler.handleTextSource(request))))
                ),

My handler

 public HttpResponse handleTextSource(HttpRequest httpRequest) {
        final Source<ByteString, Cancellable> source
                = Source.tick(Duration.ofSeconds(1), Duration.ofSeconds(2), getText()).map(ByteString::fromString);

        HttpEntity.Chunked textEntity = HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8, source);
        return HttpResponse.create().withEntity(textEntity);
    }

public String getText() {
        System.out.println("getText()");
        return "text";
    }

I ran a curl command against this route and noticed getText() is called only once, and nothing is displayed in the output. Once I kill the server, all of a sudden a number of responses arrive in the terminal. Looking at the Twitter streaming example online, I noticed they use completeOkWithSource; in that example they use a JsonEntityStreamingSupport and a Jackson.marshaller().

a) Why is Source.tick not being triggered every duration? I would expect getText() to be called every 2 seconds. Am I not using the source correctly?
b) If I use a source, do I need to use completeOkWithSource? If so, must I create my own EntityStreamingSupport? (Creating my own requires implementing many methods I don’t understand, so I was avoiding it.)
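
Regarding (a), note that Source.tick emits the same precomputed element on every tick, so getText() runs exactly once, when the source is built. A minimal sketch of invoking it per tick instead, by ticking a NotUsed placeholder and mapping (assumes akka.NotUsed is imported):

final Source<ByteString, Cancellable> source =
        Source.tick(Duration.ofSeconds(1), Duration.ofSeconds(2), NotUsed.getInstance())
                .map(tick -> ByteString.fromString(getText())); // now called every tick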

Thank you very much
David

Posts: 2

Participants: 1


Constructing Akka HTTP requests for JSON Lines


@huntc wrote:

The doco is fabulous at explaining how to unmarshal a stream of JSON Lines objects from an HTTP response using Spray JSON, but I couldn’t find anything on marshalling when constructing a request. I discovered the following approach, but wonder if it is the “blessed” way:

  /*
   * A JsonPrinter that produces compact JSON source without any superfluous whitespace.
   */
  private object JsonLinesCompactPrinter extends CompactPrinter {

    import java.lang.StringBuilder

    override protected def printArray(elements: Seq[JsValue],
                                      sb: StringBuilder): Unit =
      if (sb.length() > 0) {
        sb.append('[')
        printSeq(elements, sb.append(','))(print(_, sb))
        sb.append(']')
      } else {
        printSeq(elements, sb.append('\n'))(print(_, sb))
      }
  }

…and then to use:

    implicit val jsonLinesPrinter: JsonPrinter = JsonLinesCompactPrinter

    Marshal(events).to[RequestEntity]...

Thoughts? Is there a built-in way to do what I’m looking for?

Thanks.

Posts: 1

Participants: 1


Cannot extract the lastSequenceNumber


@pawelkaczor wrote:

Hi,
In some rare circumstances (I haven’t investigated it closely) I’m getting the following error when calling EventSourcedBehavior.lastSequenceNumber:

akka.actor.typed.internal.BehaviorImpl$ReceiveMessageBehavior
java.lang.IllegalStateException: Cannot extract the lastSequenceNumber in state akka.actor.typed.internal.BehaviorImpl$ReceiveMessageBehavior
	at akka.persistence.typed.scaladsl.EventSourcedBehavior$.lastSequenceNumber(EventSourcedBehavior.scala:109)
	at akka.persistence.typed.javadsl.EventSourcedBehavior.lastSequenceNumber(EventSourcedBehavior.scala:198)

The method lastSequenceNumber is called from the command handler (of an EventSourcedBehavior) when creating an effect chain, with the following helper method:

	protected EffectBuilder<Event, State> persist(List<Event> events) {
		long lastSeqNr = lastSequenceNumber(this.ctx);
		return Effect().persist(events).thenRun(s -> {
			forEach(withPosition(events), (eventPos, event) -> {
				long eventSeqNr = lastSeqNr + eventPos + 1;
				logEvent(event, eventSeqNr);
			});
		});
	}

Am I doing something wrong here?

Posts: 4

Participants: 2

