Issue while uploading compressed file to S3 using Alpakka

I tried to upload the contents of a compressed tar file to S3 using Alpakka, but only 1-2 entries were copied; the rest were skipped.
When I increased the chunk size to a large number (double the file size in bytes) it worked, but I suspect it will fail once the tar file gets too big. Is this expected, or have I missed something?
Below is my code:

lazy val fileUploadRoutes: Route = {
  withoutRequestTimeout
  withoutSizeLimit {
    pathPrefix("files") {
      post {
        path("uploads") {
          extractMaterializer { implicit materializer =>
            fileUpload("file") {
              case (metadata, byteSource) =>
                val uploadFuture = byteSource.async
                  .via(Compression.gunzip(200000000))
                  .via(Archive.tarReader()).async
                  .runForeach(f => {
                    f._2.runWith(
                      s3AlpakkaService.sink(
                        FileInfo(UUID.randomUUID().toString, f._1.filePath, metadata.getContentType)))
                  })
                onComplete(uploadFuture) {
                  case Success(result) =>
                    log.info("Uploaded file to: " + result)
                    complete(StatusCodes.OK)
                  case Failure(ex) =>
                    log.error(ex, "Error uploading file")
                    complete(StatusCodes.FailedDependency, ex.getMessage)
                }
            }
          }
        }
      }
    }
  }
}
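
Would something like the sketch below be the intended way to consume tarReader's output? It drains each entry's sub-source with mapAsync(1) before the outer stream pulls the next entry, instead of firing the uploads off from runForeach, so the default gunzip chunk size should be enough. This is only a sketch under my own assumptions: s3AlpakkaService.sink and FileInfo are my own helpers, and the sink is assumed to materialize a Future (like S3.multipartUpload's Future[MultipartUploadResult]).

import java.util.UUID
import scala.concurrent.Future
import akka.Done
import akka.stream.alpakka.file.scaladsl.Archive
import akka.stream.scaladsl.{Compression, Sink}

// Sketch: consume the tar entries strictly one at a time. mapAsync(1) waits for
// each entry's upload Future to complete before the outer stream pulls the next
// (TarArchiveMetadata, Source[ByteString, _]) pair, so no entry's bytes are
// dropped while a previous sub-source is still being read.
val uploadFuture: Future[Done] = byteSource
  .via(Compression.gunzip())        // default chunking, no oversized buffer needed
  .via(Archive.tarReader())
  .mapAsync(parallelism = 1) { case (entryMetadata, entrySource) =>
    // fully drain this entry's sub-source before the next entry becomes available;
    // s3AlpakkaService.sink is my helper, assumed to materialize a Future
    entrySource.runWith(
      s3AlpakkaService.sink(
        FileInfo(UUID.randomUUID().toString, entryMetadata.filePath, metadata.getContentType)))
  }
  .runWith(Sink.ignore)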
