Channel: Akka Libraries - Discussion Forum for Akka technologies

Getting a very large number of missed events for currentEventsByTag query


Hi,

I encountered a very odd issue when using the Cassandra persistence query currentEventsByTag.

Relevant log: java.lang.RuntimeException: 97124 missing tagged events for tag [TAG]. Failing without search.

My configuration is pretty basic and follows the reference conf:

events-by-tag {
  bucket-size = "Hour"
  eventual-consistency-delay = 2s
  flush-interval = 25ms
  pubsub-notification = on
  first-time-bucket = "20201001T00:00"
  max-message-batch-size = 20
  scanning-flush-interval = 5s
  verbose-debug-logging = true
  max-missing-to-search = 5000
  gap-timeout = 5s
  new-persistence-id-scan-timeout = 0s
}

We are deploying on AWS Keyspaces; there are 7-8 persistent actors at the moment.
We are using currentEventsByTag (with an Offset) to implement a data polling endpoint.
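For context, the query is issued roughly like this (a minimal sketch; the system name, tag, and timestamp are placeholders, and building the time-based UUID offset via the driver’s Uuids helper is just one way to do it):

import java.time.Instant

import akka.actor.ActorSystem
import akka.persistence.cassandra.query.scaladsl.CassandraReadJournal
import akka.persistence.query.{Offset, PersistenceQuery}
import com.datastax.oss.driver.api.core.uuid.Uuids

implicit val system: ActorSystem = ActorSystem("polling-example") // placeholder system name

// Standard lookup of the Cassandra read journal.
val readJournal = PersistenceQuery(system)
  .readJournalFor[CassandraReadJournal](CassandraReadJournal.Identifier)

// Build a time-based UUID offset for "the very end of the previous hour" (illustrative value)
// and run the finite query for the tag.
val fromMillis: Long = Instant.parse("2020-11-20T13:59:59Z").toEpochMilli
readJournal
  .currentEventsByTag("TAG", Offset.timeBasedUUID(Uuids.startOf(fromMillis)))
  .runForeach(envelope => println(envelope.event))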

This issue only happens when the offset is set to the very end of the previous hour, i.e. the 59th minute and 31+ seconds of the previous hour. After the clock moves into another hour, retrying the same query no longer causes this error.
Example: if now is 14:05:00 and I try to get data from 13:59:59, I will get an error in the logs that 97124 events are missing, but after we “fixed” this it turned out that there were only 59 events for that query, which was confirmed in the database.

There were two ways to “fix” this: either setting max-missing-to-search to 100k or 1M, which would be fine I guess if I knew that this number (of missing events) will never grow beyond 97124, or setting new-persistence-id-scan-timeout = 0s, per advice in the Akka Gitter chat.

I chose the latter, but I am still not sure that it’s a correct fix.

I kindly ask you to provide any feedback, as I couldn’t find anything about this case in the documentation. Thanks.

1 post - 1 participant

Read full topic


Actor Timer for long running applications?


I apologize if this question or something similar was asked here before. I tried to quickly scan the discussions here, before posting my question.

My question here is related to the Actor Timers. So, going over the documentation here: https://doc.akka.io/docs/akka/current/scheduler.html
Specifically where it talks about the Scheduler’s limitations: “The Akka scheduler is not designed for long-term scheduling … The maximum amount of time into the future you can schedule an event to trigger is around 8 months.”

The question is: does the same or a comparable limitation apply to the Actor Timers too? And what are the other limitations of the Timers? Can they be used in long-running applications, where a timer could run “forever” (assuming the actor itself doesn’t die, since the timer is tied to the actor lifecycle)?

Just an FYI: my intention is to use a long-running scheduler or timer inside an actor, where it would message itself. I see akka-quartz-scheduler as an alternative, but in some cases I need something executing at an interval of hundreds of milliseconds, which the quartz scheduler does not appear to support. And as far as I know, you can’t easily tell the quartz scheduler to run something every so-and-so minutes starting now (i.e. dealing with the cron expression dynamically looks yucky).
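Just to make the intent concrete, here is a minimal sketch (message type and interval are made up) of the kind of self-messaging timer I mean:

import akka.actor.typed.Behavior
import akka.actor.typed.scaladsl.Behaviors
import scala.concurrent.duration._

object Poller {
  sealed trait Command
  case object Tick extends Command // hypothetical self-message

  def apply(): Behavior[Command] =
    Behaviors.withTimers { timers =>
      // Fires every 500 ms for as long as the actor is alive; the timer is
      // cancelled automatically when the actor stops.
      timers.startTimerWithFixedDelay(Tick, 500.millis)
      Behaviors.receiveMessage { case Tick =>
        // do the periodic work here
        Behaviors.same
      }
    }
}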

regards,
Alex

3 posts - 2 participants

Read full topic

Akka streams graph stage stuck if onPull doesn't push


Hi all,

I’m developing a custom Akka source using GraphStage in Java.
I’m noticing that if no element is pushed during onPull(), the stage gets stuck.
Is my understanding incorrect that onPull() should be called again if no element was pushed?

Thanks!

{
                setHandler(out, new AbstractOutHandler() {
                    @Override
                    public void onPull() throws Exception {
                        if (count++ > 1000) {
                            completeStage();
                            return;
                        }
                        if (count % 2 != 0) {
                            Integer n = rand.nextInt(100);
                            buffer.add(n);
                            push(out, n);
                        } else {
                            // if this block is entered, source will be stuck
                            return;
                        }
                    }
                });
            }
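For comparison, here is a minimal Scala sketch (names made up) of a source stage whose out handler always answers a pull, either by pushing an element or by completing the stage, so the pending downstream demand is never left unanswered:

import akka.stream.stage.{GraphStage, GraphStageLogic, OutHandler}
import akka.stream.{Attributes, Outlet, SourceShape}
import scala.util.Random

class NumbersSource extends GraphStage[SourceShape[Int]] {
  val out: Outlet[Int] = Outlet("NumbersSource.out")
  override val shape: SourceShape[Int] = SourceShape(out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      private var count = 0
      setHandler(out, new OutHandler {
        override def onPull(): Unit = {
          count += 1
          if (count > 1000) completeStage()   // stop instead of leaving the pull unanswered
          else push(out, Random.nextInt(100)) // always satisfy the pending demand
        }
      })
    }
}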

5 posts - 3 participants

Read full topic

Is the Typed StashBuffer capacity pre-allocated?


In Akka Typed, the StashBuffer has a maximum capacity. What is the reason for this? Will increasing the capacity decrease performance, independent of the number of elements actually in the buffer?

In other words, will a StashBuffer with a capacity of 100 be faster than a StashBuffer with a capacity of 100000 if both buffers contain the same number of elements (e.g. 99 each)?

Sometimes I don’t care about the upper bound of the buffer (just like I don’t care about the .size of a Scala List); can I just set the capacity to the maximum then?
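For context, a minimal sketch (messages made up) of the kind of usage I mean, with the capacity passed to Behaviors.withStash:

import akka.actor.typed.Behavior
import akka.actor.typed.scaladsl.Behaviors

object Example {
  sealed trait Command
  case object Ready extends Command                      // hypothetical messages
  final case class Work(payload: String) extends Command

  def apply(): Behavior[Command] =
    // The capacity is a hard upper bound; stashing beyond it throws a StashOverflowException.
    Behaviors.withStash[Command](100000) { buffer =>
      Behaviors.receiveMessage {
        case Ready =>
          // Replay everything stashed so far into the next behavior.
          buffer.unstashAll(active())
        case other =>
          buffer.stash(other)
          Behaviors.same
      }
    }

  private def active(): Behavior[Command] =
    Behaviors.receiveMessage(_ => Behaviors.same)
}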

2 posts - 2 participants

Read full topic

Akka-persistence-cassandra: Getting "Unable to find missing tagged event" when replaying events


Hi, we are observing some issues with akka-persistence-cassandra.

When we have a lot of events and replay them, we hit this line (missing.deadline.isOverdue()) and get an IllegalStateException from it.

I wish I could provide more information on this, but I am not familiar enough with the internals to understand which data is relevant to the investigation. Could someone help debug/fix this, or should I file an issue if this is an unknown bug? If you need more information, please let me know.
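For reference, the gap search seems to be governed by the akka-persistence-cassandra events-by-tag settings; this is a sketch of the knobs that look relevant here (the full config path and the values are assumptions on my part, the setting names come from the reference configuration):

akka.persistence.cassandra.events-by-tag {
  # how many apparently missing tagged events to search for before failing the stage
  max-missing-to-search = 5000
  # how long to keep searching for a gap in tag sequence numbers before giving up
  gap-timeout = 10s
  # how long to look for earlier events from a persistence id seen for the first time
  new-persistence-id-scan-timeout = 10s
}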

1 post - 1 participant

Read full topic

Akka HTTP 10.1.13 Released


Dear hakkers,

We are happy to announce the 10.1.13 release of Akka HTTP. This release is the 13th update in the 10.1.x series of Akka HTTP.

This is a small maintenance release which fixes a bug in the client connection pool and optimizes some cases of discarding entities.

At this point, development has shifted to 10.2.x and only important fixes will be backported to 10.1.x. Any new features will only become available in 10.2.x, so make sure to start migration to 10.2.x soon.

Changes since 10.1.12

For a full overview, you can also see the 10.1.13 milestone.

akka-http-core

  • Don’t fail pool slot after previous connection failed in special condition #3021
  • Make HttpEntity.Strict.discardBytes a no-op #3329

Credits

For this release we had the help of 2 contributors – thank you all very much!

commits  added  removed
     11    804      588 Johannes Rudolph
      3     40        3 Arnout Engelen

Akka by Lightbend

The Akka core team is employed by Lightbend. If you’re looking to take your Akka systems to the next level, let’s set up a time to discuss our enterprise-grade expert support, self-paced education courses, and technology enhancements that help you manage, monitor and secure your Akka systems - from development to production.

Happy hakking!

– The Akka Team

1 post - 1 participant

Read full topic

Lagom configuration to enable netty hostname verification

Testing Akka HTTP in an actor with ScalaTestWithActorTestKit


I have a simple actor functioning as a proxy to a model serving service.

object ModelService {

  sealed trait Command extends NoSerializationVerificationNeeded

  sealed trait Request[R <: Reply] extends Command {
    def replyTo: ActorRef[R]
  }

  case class GetPrediction(replyTo: ActorRef[Reply], htmlInput: String, probabilityThreshold: Float)
      extends Request[Reply]

  sealed trait Reply extends NoSerializationVerificationNeeded

  case object ModelOffline extends Reply
  case class Prediction(probIndexVec: Vector[(Float, Int)]) extends Reply
  case class ModelError(error: String) extends Reply

//   case class Context(replyTo: ActorRef[Reply])
  def apply(modelServiceHost: String, port: Int, path: String): Behavior[Command] = {

    val QueueSize = 10

    Behaviors.setup { context =>
      implicit val system = context.system.toClassic
      import system.dispatcher // to get an implicit ExecutionContext into scope

      val poolClientFlow =
        Http()(system).cachedHostConnectionPool[ActorRef[Reply]](modelServiceHost, port)

      def createRequest(predictionCommand: GetPrediction): (HttpRequest, ActorRef[Reply]) = ???

      def parseResponse(response: HttpResponse): Either[ModelError, Prediction] = ???

      val queue = Source
        .queue[GetPrediction](QueueSize, OverflowStrategy.dropNew)
        .map(createRequest)
        .via(poolClientFlow)
        .to(Sink.foreach({
          case (Success(resp), replyTo) => parseResponse(resp).fold(replyTo ! _, replyTo ! _)
          case (Failure(e), replyTo)    => replyTo ! ModelError("failed to get a response from the model service")
        }))
        .run()

      Behaviors.receiveMessage {
        case cmd @ GetPrediction(replyTo, htmlInput, probabilityThreshold) =>
          queue.offer(cmd)
          Behaviors.same

      }

    }
  }

}

When I want to test it with this:

import akka.actor.testkit.typed.scaladsl.ScalaTestWithActorTestKit
import org.scalatest.wordspec.AnyWordSpecLike
import org.scalatest.BeforeAndAfterAll
import org.scalatest.matchers.should.Matchers
import com.typesafe.config.ConfigFactory

class ModelServiceSpec extends ScalaTestWithActorTestKit() with AnyWordSpecLike with BeforeAndAfterAll with Matchers {

  override def afterAll(): Unit = testKit.shutdownTestKit()

  "the model service" when {

    val modelService = testKit.spawn(ModelService("127.0.0.1", 8080, "/model"), "model-service")
    val probe = testKit.createTestProbe[ModelService.Reply]()

    "a valid request" should {

      "get response from the model-serving server" in {
        modelService ! ModelService.GetPrediction(probe.ref, "this is a test scala question", 0.7f)
        probe.expectMessage(ModelService.ModelError("still testing"))
      }

    }

  }

}

I saw

[2020-11-20 22:08:17,219] [DEBUG] [akka.remote.artery.Decoder] [] [ModelServiceSpec-akka.actor.default-dispatcher-5] - Decoded message but unable to record hits for compression as no remoteAddress known. No association yet? {akkaAddress=akka://ModelServiceSpec@127.0.0.1:2551, sourceThread=ModelServiceSpec-akka.remote.default-remote-dispatcher-11, akkaSource=Decoder(akka://ModelServiceSpec), sourceActorSystem=ModelServiceSpec, akkaTimestamp=21:08:17.218UTC}
[2020-11-20 22:08:17,219] [WARN] [akka.remote.artery.InboundHandshake$$anon$2] [] [ModelServiceSpec-akka.actor.default-dispatcher-5] - Dropping Handshake Request from [akka://ModelServiceSpec@127.0.0.1:2551#1138052464865087236] addressed to unknown local address [akka://nt-ui@127.0.0.1:2551]. Local address is [akka://ModelServiceSpec@127.0.0.1:2551]. Check that the sending system uses the same address to contact recipient system as defined in the 'akka.remote.artery.canonical.hostname' of the recipient system. The name of the ActorSystem must also match. {akkaAddress=akka://ModelServiceSpec@127.0.0.1:2551, sourceThread=ModelServiceSpec-akka.actor.internal-dispatcher-4, akkaSource=InboundHandshake$$anon$2(akka://ModelServiceSpec), sourceActorSystem=ModelServiceSpec, akkaTimestamp=21:08:17.218UTC}

it seems to me that the probe is using the address akka://ModelServiceSpec@127.0.0.1:2551 whereas the real actor is using akka://nt-ui@127.0.0.1:2551.
(“nt-ui” is my project’s name.)

Can someone explain the meaning of “Check that the sending system uses the same address to contact recipient system as defined in the ‘akka.remote.artery.canonical.hostname’ of the recipient system. The name of the ActorSystem must also match.”?
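For reference, a minimal sketch (assuming the spec doesn’t actually need remoting) of passing configuration to the testkit so the test ActorSystem stays local and no Artery handshake happens at all:

import akka.actor.testkit.typed.scaladsl.ScalaTestWithActorTestKit
import com.typesafe.config.ConfigFactory
import org.scalatest.wordspec.AnyWordSpecLike

// Keep the test ActorSystem purely local so there is no Artery handshake (and no clash
// with the real "nt-ui" system name / canonical hostname) during the test.
class LocalModelServiceSpec
    extends ScalaTestWithActorTestKit(ConfigFactory.parseString("akka.actor.provider = local"))
    with AnyWordSpecLike {
  // ... same test cases as above ...
}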

thanks!

3 posts - 1 participant

Read full topic


How to override supervisory strategy of classic actor from typed one?


According to the Akka documentation it is possible to “spawn and supervise typed child from classic parent, and opposite”. But how can I override the standard “restart” supervision strategy of a classic actor from its typed parent? Behaviors.supervise needs a Behavior as an argument, which a classic actor does not have.

1 post - 1 participant

Read full topic

How do I configure a Priority Mailbox (Akka Typed)?


How do I configure the message priorities of a Priority Mailbox (e.g. UnboundedStablePriorityMailbox) in Akka Typed? Is that done through the configuration file, or at “runtime” through instantiating/extending the Mailbox? Cheers.

PS: I’m positive it can be done at runtime, but I was wondering if I’m missing something, since they all have a configuration name.
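For reference, one pattern that seems to fit (a sketch; the mailbox class, config path, and String messages are all made up) combines a mailbox class referenced from configuration with MailboxSelector.fromConfig at spawn time:

import akka.actor.typed.scaladsl.Behaviors
import akka.actor.typed.{Behavior, MailboxSelector}
import akka.dispatch.{PriorityGenerator, UnboundedStablePriorityMailbox}
import com.typesafe.config.Config

// Mailbox class that the configuration points at; the priority logic lives here.
class MyPriorityMailbox(settings: akka.actor.ActorSystem.Settings, config: Config)
    extends UnboundedStablePriorityMailbox(PriorityGenerator {
      case "high" => 0 // lower value = higher priority
      case _      => 1
    })

// application.conf (sketch):
//   my-app.priority-mailbox {
//     mailbox-type = "com.example.MyPriorityMailbox"
//   }

object Guardian {
  def apply(): Behavior[Nothing] =
    Behaviors.setup[Nothing] { context =>
      val worker: Behavior[String] = Behaviors.receiveMessage(_ => Behaviors.same)
      // Select the mailbox by its configuration path when spawning the child.
      context.spawn(worker, "worker", MailboxSelector.fromConfig("my-app.priority-mailbox"))
      Behaviors.empty
    }
}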

1 post - 1 participant

Read full topic

Best way of creating a JSON RequestEntity for the client http call


I’d like to make a client HTTP call, providing a JSON RequestEntity with some serialized Java object.

I assume that doing this serialization explicitly in a blocking way is not the best option, is it?

ObjectMapper om = ...;
HttpRequest.create()
   .withMethod(POST)
   .withUri(...)
   .withEntity(HttpEntities.create(APPLICATION_JSON, om.writeValueAsString(obj)));

So I wonder what is the recommended way of doing this?
Can Materializer be used somehow?
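For illustration, here is a minimal Scala sketch (using spray-json instead of Jackson, purely as an example) where Marshal produces the RequestEntity as a Future that composes with the request:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport._
import akka.http.scaladsl.marshalling.Marshal
import akka.http.scaladsl.model._
import spray.json.DefaultJsonProtocol._

import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("client-example") // placeholder
import system.dispatcher

final case class MyPayload(id: Int, name: String) // hypothetical payload type
implicit val payloadFormat = jsonFormat2(MyPayload)

// Marshal the object to a JSON RequestEntity (returns a Future), then issue the request.
val response: Future[HttpResponse] =
  Marshal(MyPayload(1, "example")).to[RequestEntity].flatMap { entity =>
    Http().singleRequest(
      HttpRequest(method = HttpMethods.POST, uri = "http://localhost:8080/items", entity = entity))
  }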

1 post - 1 participant

Read full topic

Vector Clock for Akka Actor

Bug or by-design behavior? Stopping a stream via KillSwitch + map combination


Given the Akka docs I would expect the stream to stop after the 7th/8th element. Why is it not stopping? It continues all the way to the last element (20th).

What I want to achieve is that on system terminate, the stream stops requesting new elements and the system waits with termination until all elements already in the stream are fully processed (reach the sink).

import akka.Done
import akka.actor.CoordinatedShutdown
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.stream.KillSwitches
import akka.stream.scaladsl.{Keep, Sink, Source}

import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}

object StreamKillSwitch extends App {

  implicit val system = ActorSystem(Behaviors.ignore, "sks")
  implicit val ec: ExecutionContext = system.executionContext

  val (killStream, done) =
    Source(1 to 20)
      .viaMat(KillSwitches.single)(Keep.right)
      .map { i =>
        system.log.info(s"Start task $i")
        Thread.sleep(100)
        system.log.info(s"End task $i")
        i
      }
      .toMat(Sink.foreach(println))(Keep.both)
      .run()

  CoordinatedShutdown(system)
    .addTask(CoordinatedShutdown.PhaseServiceUnbind, "stop-receiving") { () =>
      Future(killStream.shutdown()).map(_ => Done)
    }

  CoordinatedShutdown(system)
    .addTask(CoordinatedShutdown.PhaseServiceRequestsDone, "wait-processing-complete") { () =>
      done
    }

  Thread.sleep(720)

  system.terminate()
  Await.ready(system.whenTerminated, 5.seconds)
}

also on stackoverflow: https://stackoverflow.com/questions/65062099/gracefully-stopping-an-akka-stream/

2 posts - 2 participants

Read full topic

Exclude self node from Distributed PubSub in cluster


Hi Guys!

Is it possible to exclude the self node from message distribution with Distributed PubSub in a cluster?

Thanks in advance

4 posts - 2 participants

Read full topic

Work pulling: unexpected RequestNext messages


I’m using the work pulling mechanism to implement a dynamic, distributed pool of workers. I’ve read the documentation and the relevant sample code, but while testing some failure modes I’ve encountered an unexpected behavior.

I’ve reduced my context to the smallest possible one:

I’ve modified my worker so that it always fails by throwing a RuntimeException. Since it’s configured with the restart supervision strategy, the worker actor will always restart when it fails.

Reading the documentation this is what I expect:

  1. System starts
  2. The producer receives a RequestNext message for the only active worker.
  3. I send a job to the producer, so that it’s sent to the idle worker.
  4. The worker receives the job but fails and doesn’t send the confirm message to the ConsumerController.
  5. Since the WorkPullingProducerController has tracked that job as unconfirmed, is informed by the infrastructure that the worker it sent that job to has been restarted, and doesn’t have any other workers to send that message to, it re-sends the unconfirmed message to the restarted worker.

And this is precisely what happens. However I’d have also expected that during this process the producer wouldn’t receive any further RequestNext message, since there is only one worker and it’s working on something already. Instead, I see this behavior:

  1. Producer: receives a RequestNext
  2. Producer: uses RequestNext.sendNextTo to send the job to the worker.
  3. Producer: almost immediately receives another RequestNext
  4. Consumer: fails, then restarts
  5. Producer: receives another RequestNext
  6. Consumer: receives the same job, fails and restarts again
  7. Producer: receives another RequestNext

The producer receives more and more RequestNext messages, even though there is only one worker and it only uses the sendNextTo ActorRef of the first one. Why is that?

For the time being I’ve modified my code so that the worker never fails, always replies with the ConsumerController.Confirmed message and tracks the successful or failed execution elsewhere. However, I’d like to really understand how the work pulling pattern works in the above scenario.
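For reference, the worker-side change boils down to always confirming the delivery (a sketch with a made-up job type and names; error handling is elided):

import akka.actor.typed.Behavior
import akka.actor.typed.delivery.ConsumerController
import akka.actor.typed.receptionist.ServiceKey
import akka.actor.typed.scaladsl.Behaviors

object Worker {
  final case class Job(payload: String) // hypothetical job type

  sealed trait Command
  private final case class WrappedDelivery(delivery: ConsumerController.Delivery[Job]) extends Command

  val serviceKey: ServiceKey[ConsumerController.Command[Job]] =
    ServiceKey[ConsumerController.Command[Job]]("worker")

  def apply(): Behavior[Command] =
    Behaviors.setup { context =>
      val deliveryAdapter =
        context.messageAdapter[ConsumerController.Delivery[Job]](WrappedDelivery(_))
      val consumerController =
        context.spawn(ConsumerController(serviceKey), "consumerController")
      consumerController ! ConsumerController.Start(deliveryAdapter)

      Behaviors.receiveMessage { case WrappedDelivery(delivery) =>
        // Do the work here, catching failures so the actor itself does not restart,
        // and always confirm so the producer-side bookkeeping stays in sync.
        delivery.confirmTo ! ConsumerController.Confirmed
        Behaviors.same
      }
    }
}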

1 post - 1 participant

Read full topic


Artery serialization exception for object that isn't part of an actor message


Using akka 2.6.10 classic

I’m getting java.io.NotSerializableException that originates from akka.remote.MessageSerializer$.serializeForArtery

I understand how to make this class serializable, but…
by design and based on code inspection and searching for references with IntelliJ, this class should never be part of an actor message, much less a remote actor message.

Any tips on debugging are appreciated.

TIA,
-david

1 post - 1 participant

Read full topic

Alpakka Kafka and Manual Offset Management


Hi everybody,

I am trying to understand the manual offset management of Alpakka Kafka, but I have some problems understanding the concepts…

When I read the existing API, I have a feeling the manual offset methods in Source.scala are mainly designed to handle external offset management, not for business logic deciding to commit the offset in Kafka-managed offset management.

In many projects (without Akka and Alpakka Kafka) I have used Kafka’s manual offset management to advantage in long-running processes: when the business logic succeeds, that signals a commit of the Kafka offset, marking the message as successfully processed.

Now I could implement the same kind of logic with an Akka/Kafka combination without using Alpakka (writing a Kafka consumer, sending the message to Akka with an ask, delivering the offset as payload and returning the offset in the response payload when the business logic succeeds), but my main motivation for using Alpakka is to take advantage of its backpressure mechanisms.

But if I look at the methods in Source.scala, ‘plainPartitionedManualOffsetSource’ and ‘committablePartitionedManualOffsetSource’, they give me the impression that they are there for external offset management, but not really for committing the offset depending on the result of the business case.

To be more concrete, this is an Alpakka stream configuration that works for me at the moment:

   val control : Consumer.DrainingControl[Done] =
      Consumer
        .sourceWithOffsetContext(consumerSettings, Subscriptions.topics("myTopic"))
        .mapAsync(streamConfigProperties.getAkkaStreamParallelism) { consumerRecord =>
          val myAvro: MyAvro = consumerRecord.value().asInstanceOf[MyAvro]
          askUpdate(myAvro)
        }
        .via(Committer.flowWithOffsetContext(CommitterSettings(AkkaSystem.system.toClassic)))
        .toMat(Sink.ignore)(Consumer.DrainingControl.apply)
        .run()

which works, but as I mentioned I am trying to convert it to

   val control : Consumer.DrainingControl[Done] =
      Consumer
        .committablePartitionedManualOffsetSource(
          consumerSettings,
          Subscriptions.topics("myTopic"),
          partitions => getOffsetsOnAssign(partitions, consumerSettings),
          partitions => Set[TopicPartition]()
        )
        .map {
          source =>
            source._2.mapAsyncUnordered(streamConfigProperties.getAkkaStreamParallelism) {
              message =>
                val myAvro: MyAvro =
                  message.record.value().asInstanceOf[MyAvro]
                askUpdate(myAvro, message.committableOffset)
                  .map(response =>
                    response match {
                      case i1: MyActor.ProcessCompleteResponse =>
                        message.committableOffset
                      case unh @ _ =>
                        AkkaSystem.mySystem.log.info("Business Case says we can't commit")
                        null
                    }
                  )
            }.runWith(Committer.sink(CommitterSettings(AkkaSystem.mySystem.toClassic)))
        }
        .toMat(Sink.ignore)(Consumer.DrainingControl.apply)
        .run()
def getOffsetsOnAssign(partitions : Set[TopicPartition], consumerSettings : ConsumerSettings[String, SpecificRecord]) : Future[Map[TopicPartition, Long]] =
    Future {
      partitions
    }.map(partitions => {
      val kafkaConsumer: org.apache.kafka.clients.consumer.Consumer[String, SpecificRecord] = 
            consumerSettings.createKafkaConsumer()
      val mapOffsets : util.Map[TopicPartition, OffsetAndMetadata] = 
            kafkaConsumer.committed(partitions.asJava)

      var finalMap : Map[TopicPartition, Long] = Map[TopicPartition, Long]()
      mapOffsets.forEach((key, value) => {
          if(value != null) {
            finalMap += (key -> value.offset())
          } else {
            finalMap += (key -> 0L)
          }
        }
      )

      finalMap
    })

According to my tests this works too, but I am not sure this is the correct way to do it, and maybe more compact code could be written for it.

And actually, I am not sure what is expected from us when ‘onRevoke’ of ‘committablePartitionedManualOffsetSource’ occurs.

Any comments or suggestions?

1 post - 1 participant

Read full topic

Unexpected behavior when connecting JMS consumer source to JMS producer sink


Hi,

I’m new to Alpakka (and JMS). I’m attaching a minimal example where I publish messages to topic1 using a tick source (1 message per second) and a JmsProducer sink. I then subscribe to topic1 using a JmsConsumer source and print out the messages as they arrive. This works as expected, i.e., I see one message per second printed.

I then connect the consumer source for topic1 to a producer sink for topic2. I would expect to see messages arriving in topic2 at the same rate as for topic1. Instead, no messages are published to topic2 at all, and the source for topic1 seems to get short-circuited, producing messages at a rate of hundreds per second. Adding a throttle between the topic1 consumer and the topic2 producer reduces the rate to 1/second again for topic1, but still no messages arrive in topic2.

What am I missing here?

package com.example

import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors

import akka.stream.scaladsl.Source;

import akka.stream.alpakka.jms.JmsConsumerSettings;
import akka.stream.alpakka.jms.JmsProducerSettings;
import akka.stream.alpakka.jms.JmsTextMessage
import akka.stream.alpakka.jms.scaladsl.JmsConsumer;
import akka.stream.alpakka.jms.scaladsl.JmsProducer;

import scala.concurrent.duration._

import javax.jms.TextMessage

object Main extends App {

  implicit val system = ActorSystem(Behaviors.empty, "AkkaQuickStart")
  implicit val ec = system.executionContext

  val url =
    "tcp://localhost:61616" // running ActiveMQ broker in Docker container

  val connectionFactory: javax.jms.ConnectionFactory =
    new org.apache.activemq.ActiveMQConnectionFactory(url)

  val producerSink = JmsProducer.sink(
    JmsProducerSettings(system, connectionFactory).withTopic("topic1")
  )

  // this works - publishes 1 message / second to `topic1`
  Source
    .tick(0.seconds, 1.second, "hi")
    .map(JmsTextMessage(_))
    .runWith(producerSink)

  val consumerSource = JmsConsumer(
    JmsConsumerSettings(system, connectionFactory).withTopic("topic1")
  ).collect { case message: TextMessage =>
    JmsTextMessage(message)
  }

  // this also works - consumes messages as they arrive in `topic1`
  consumerSource.runForeach(t =>
    println(s"consumer of topic1: ${t} - ${t.body}")
  )

  val producerSink2 = JmsProducer.sink(
    JmsProducerSettings(system, connectionFactory).withTopic("topic2")
  )

  // this doesn't work - doesn't publish to `topic2`; instead short-circuits `topic1` producer to produce 100s of messages per second.
  consumerSource.runWith(producerSink2)
}

1 post - 1 participant

Read full topic

Preserving the state of cluster singleton


Is there a way to transfer the state from an old singleton instance to a new one when the oldest node in the cluster changes?

1 post - 1 participant

Read full topic

Akka HTTP 10.2.2 released


Dear hakkers,

We are happy to announce the 10.2.2 release of Akka HTTP. This release is the second update in the 10.2.x series of Akka HTTP.

Changes since 10.2.1

For a full overview you can also see the 10.2.2 milestone.
Notably, we have made various improvements to the HTTP/2 server support
while we continue to prepare for providing HTTP/2 support at the client as
well.

As of this version, it is no longer strictly necessary to depend on a separate
‘akka-http2-support’ artifact to get HTTP/2 support in Akka HTTP. We will
continue to publish an empty artifact with that name to make it easy to
upgrade, however.


akka-http-core

  • Allow illegal header keys to be ignored #3133
  • HttpCookie.copy was calling itself recursively #3670
  • Don’t dispatch onResponseEntity… events for entities that don’t have to be subscribed #3574
  • Add modeled ‘TE’ header #3616, #3618
  • Add parens to deprecation messages for bind-methods #3547
  • Use official version of SocketUtil in tests #3558
  • Link to https rather than http #3663
  • Avoid temporaryServerHostnameAndPort #3691

akka-http

  • Optimize header directives by avoiding HttpHeader.unapply #3591

akka-http-marshallers

  • Update jackson-dataformat-xml from 2.10.5 to 2.10.5.1 #3695

akka-http-testkit

  • Fall through on AssertionErrors in testkit #3512
  • Fail noisily if trying to use ~> together with transparent-head-requests #3569

docs

  • Add example for Get request with query parameters #3624
  • Add akka-actor-typed dependency to introduction.md #3565
  • Typo fix: “the onne” --> “the one” #3521
  • Some docs gardening #3544
  • Link project info to snapshots #3681
  • Release notes for 10.1.13 #3635
  • Add section about how to configure snapshots #3642
  • Add example for Get request with query parameters #3624
  • Update formFields example #3650
  • Verify signatures from the markdown actually exist #3654
  • Add link from formFields to parameters directive docs #3655
  • Use explicit paths to snippets #3653
  • Fix the Bid class. #3661

akka-http2-support

  • Extract common rendering + add userAgent on requests #3657
  • Continue accepting window updates even when done sending #3619
  • Support custom HTTP methods for HTTP/2 #3622
  • More robust handling of IncomingStreamBuffer.onPull #3621
  • Simplify Http2Substream hierarchy trading a bit of type-safety for simplicity #3605
  • More useful event info for stream state changes #3643
  • Add some more basic HTTP/2 client tests #3644
  • Configurable ping support through demuxer #3617
  • Close the substreams earlier from the demuxer #3647
  • HTTP/2 client test coverage (and support for) TLS session info #3668
  • Handle RST while sending data + more tests #3671
  • Fix lost ‘pull’ when closing while waiting for window #3672
  • Send and enforce max concurrent streams setting #3529
  • Unify incoming and outgoing state machine #3552
  • Disable Push (send SETTING) #3555
  • Fix Scala 2.12 compilation error #3582
  • Remove explicit parallelism parameter in HTTP/2 #3545
  • Respect remote max concurrent streams (reprise) #3581
  • Also go through state machine for resetStream #3584
  • Fix leakage of incoming substreams in edge cases #3233
  • Run handler code in its own task on the server #3593
  • Check HTTP2 headers for correctness #3603
  • HTTP/2 connection level API #3511
  • Accepting trailing headers in the HTTP/2 client #3602
  • Protocol-level client specs #3532
  • Move some public API bits to their final places and mark them as @ApiMayChange #3526
  • Use LogHelper in demux classes #3506
  • Remove some minor repetition in HTTP/2 rendering #3229
  • Test server responses to invalid ping frames #3523
  • Use scheme portion of target URI in :scheme pseudo-header #3535
  • Add test for receiving WINDOW_UPDATE in HalfClosedRemoteWaitingForOutgoingStream state #3554
  • Make ‘network’ and ‘user’ sides more explicit #3638
  • streamId attribute is not needed / supported on client side #3652

build

  • Akka HTTP BOM #3665
  • Add 10.2.1 to MiMa #3499
  • Increase paradox parsing timeout #3508
  • Also aggregate akka-http-bench-jmh, to fail if benchs fail to compile #3561
  • Exclude akka-http-bench-jmh from whitesource #3562
  • Disable whitesource for submodules which are not to be published #3564
  • Remove sbt-javaagent #3598
  • Include stack traces in scalatest failures #3612
  • Add 10.1.13 to mima #3641

And various updates:

  • Update akka to 2.5.32 #3558
  • Update caffeine from 2.8.5 to 2.8.7 #3515
  • Update junit from 4.12 to 4.13.1 #3517
  • Update paradox-theme-akka, … from 0.35 to 0.36 #3697
  • Update sbt-bintray from 0.5.6 to 0.6.1 #3572, #3596
  • Update sbt-mima-plugin from 0.8.0 to 0.8.1 #3548
  • Update sbt-scalafix, scalafix-core, … from 0.9.21 to 0.9.23 #3614
  • Update specs2-core from 4.10.3 to 4.10.5 #3518, #3550
  • Update spray-json from 1.3.5 to 1.3.6 #3629

Credits

The complete list of closed issues can be found on the 10.2.2 milestone on GitHub.

For this release we had the help of 17 contributors – thank you all very much!

commits  added  removed
     38   1177     1029 Johannes Rudolph
     36   2454     1579 Arnout Engelen
     12   1366      180 Johan Andrén
      7    773      389 Ignasi Marimon-Clos
      3     15       25 Enno Runne
      1    123       16 Christof Nolle
      1     33        2 Nikhil
      1     22        0 Artur Soler
      1      4        4 Philippus Baalman
      1      3        2 Andrea Peruffo
      1      3        2 Sungho Hwang
      1      2        2 Nathaniel Fischer
      1      2        2 Michael Simons
      1      2        2 nitikagarw
      1      1        1 gnp
      1      1        1 Nitika Agarwal
      1      1        1 Roberto Leibman

Akka by Lightbend

The Akka core team is employed by Lightbend. If you’re looking to take your Akka systems to the next level, let’s set up a time to discuss our enterprise-grade expert support, self-paced education courses, and technology enhancements that help you manage, monitor and secure your Akka systems - from development to production.

Happy hakking!

– The Akka Team

1 post - 1 participant

Read full topic
