How to integrate dotnet lambda layers in a CI/CD pipeline #79
You raise interesting questions. For CI/CD I was imagining layers would be created in a separate pipeline, only creating new ones when the versions of the package references changed. That still leaves the problem you identified: what if the build step has no access to AWS to download the manifest for the layer? In your workflow, when would the layer be published? It would need to be done as part of Amazon.Lambda.Tools to make sure the layer description is set up correctly and the layer manifest is uploaded to S3.
You say that for CI/CD you were imagining that layers would be created in a separate pipeline. I will try to read between the lines and see what it would look like. For a lot of folks, I think that a pipeline is tied to a repository and a repository has only one pipeline. So whenever a commit is done in that repository, the pipeline starts. In the case of layers, it would mean that there would be one repository that defines a layer (let's call it the "layer repository") and one (or more) repositories that reference this layer (let's call them the "consumer repositories"). So the pipeline of the layer repository would call `dotnet lambda publish-layer`.

For teams where dependencies are standardized (ex: everyone must use Newtonsoft version X, everyone must use FluentValidation version Y, everyone must use AutoMapper version Z, etc.), this could be an efficient process. But for other teams where such standardization does not exist, it would be a painful process. In my team for example, we have about 25 .NET Core micro-services that each have their own set of dependencies. It would be a colossal effort to try to agree on a common set of dependencies.

Let's just imagine that we don't have to agree on a common set of dependencies. Instead, we have one layer repository (and pipeline) per consumer repository. So if a developer wants to change the version of Newtonsoft, he would need to go in the layer repository, do a commit, start the pipeline, deploy the resulting layer, change the lambda ARN in the consumer repository, commit, and test it. A little painful :( And now imagine that he then finds out that the new version of Newtonsoft does not work in his consumer repository... I'm pretty sure that he would complain that he is losing his time... things could be much simpler for him. Another problem that I see with this approach is that the deployment of layers and lambdas is now dependent on the layer repository's pipeline.

Let's now see an alternative approach that I feel is more CI/CD friendly. In a given repository (i.e. for a given micro-service), I have 2 csproj files: layer.csproj and service.csproj.
The layer.csproj contains all the package references:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
    <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
    <AWSProjectType>Lambda</AWSProjectType>
    <OutputType>Library</OutputType>
    <StartupObject />
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="AWSSDK.Lambda" Version="3.3.17.12" />
    <PackageReference Include="FluentValidation.AspNetCore" Version="8.0.100" />
    <PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.2" />
    <PackageReference Include="Amazon.Lambda.AspNetCoreServer" Version="2.1.0" />
    <PackageReference Include="Polly" Version="6.1.1" />
  </ItemGroup>
</Project>
```

Then, service.csproj only has a project reference to the layer project:

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
    <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
    <AWSProjectType>Lambda</AWSProjectType>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="../LANDR.DownloadBin.Layer/LANDR.DownloadBin.Layer.csproj" />
  </ItemGroup>
</Project>
```

Let's now see what the build script of this repository could look like. First, it would need to create a runtime package store for the layer that can later be uploaded to S3. If I understand correctly, the `dotnet lambda publish-layer` command roughly does the following:

1. creates a runtime package store from the layer's package manifest (and zips it)
2. uploads the zip to S3
3. uploads the store's artifact manifest to S3
4. calls the Lambda PublishLayerVersion API
5. encodes the manifest's S3 location in the new layer's description
As already said, I think that a build script rarely has access to AWS and should not be responsible for deploying; it should only generate artifacts. With this in mind, step 1 fits in a build script while steps 2 to 5 don't. Here's an alternative: instead of calling `dotnet lambda publish-layer`, my build script only performs step 1, i.e. it creates the runtime package store for layer.csproj and zips it.

So it does not use AWS and works perfectly so far. Since this step does not publish anything, maybe a more proper name for such a command would be `package-layer`. Then, to package my lambda, instead of using `dotnet lambda deploy-function`, I use `dotnet lambda package`. So here are the resulting artifacts that I have: the zip of my lambda function and the zip of my layer.
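For illustration, such a build-only step might look like the sketch below. The `dotnet store` invocation is what I believe creates the runtime package store; the paths, project names, framework, and runtime values are illustrative assumptions, not something prescribed by Amazon.Lambda.Tools:

```shell
# Step 1 only: create the runtime package store locally and zip it.
# No AWS access is required. Paths and names are illustrative.
dotnet store \
    --manifest ./LANDR.DownloadBin.Layer/LANDR.DownloadBin.Layer.csproj \
    --framework netcoreapp2.1 \
    --runtime linux-x64 \
    --output ./store

# Zip the store so it can be kept as a build artifact.
(cd ./store && zip -r ../layer.zip .)

# Package the function itself, also without touching AWS.
dotnet lambda package --output-package ./service.zip
```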
All this was done without using AWS resources (no store & read from S3). These two zips would become build artifacts that can later be used by my deployment pipeline. In my case, I deploy with terraform. My terraform definition would look like this:
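As a minimal sketch of what such a definition might look like: `aws_lambda_layer_version`, its `filename` argument, and the `layers` attribute of `aws_lambda_function` are real terraform constructs, while the resource names, handler, and role below are placeholders of my own:

```hcl
resource "aws_lambda_layer_version" "deps" {
  layer_name          = "my-service-deps"   # placeholder name
  filename            = "layer.zip"         # the layer zip produced by the build step
  compatible_runtimes = ["dotnetcore2.1"]
}

resource "aws_lambda_function" "service" {
  function_name = "my-service"              # placeholder name
  filename      = "service.zip"             # the lambda package from the build step
  handler       = "MyAssembly::MyNamespace.MyFunction::Handler"  # placeholder
  runtime       = "dotnetcore2.1"
  role          = aws_iam_role.lambda_role.arn  # assumed to be defined elsewhere
  layers        = [aws_lambda_layer_version.deps.arn]
}
```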
As usual, terraform would use the `aws_lambda_layer_version` resource, whose `filename` argument takes the layer zip as input. I think that this workflow is more natural because the build step only generates artifacts and everything that touches AWS happens in the deployment step.
@normj I have answered your question above. Maybe a more appropriate title for the issue would be something like "add a new dotnet lambda package-layer command".
@mabead Thanks for taking the time to write this. I agree with your concerns and your overall flow makes sense. The problem I have though is the description that Amazon.Lambda.Tools writes when it publishes a layer: in your flow I'm not seeing when this description would ever be set. Without this description the layer is really only good for that single Lambda function deployment where your build system has the context in its pipeline state.
@normj You are right, I didn't notice the metadata that you encode in the layer description. Ex:

But let's assume that this layer is referenced through a local path like this:

Would you need this metadata? The metadata basically tells the S3 bucket & key where the zip is located, but this is not needed when using layers that are available locally. S3 is not needed.

Note that I am interested in using lambda layers because of this sentence from this post:

As mentioned over (dotnet/core#1968) & over (shameless plug), cold starts are a blocker for many people to use lambdas. I am hoping to shave a second off the lambda cold start by using layers with pre-jitted code. Let me know if my dreams are realistic or not... I also hope that .NET Core 3 ReadyToRun images may help in improving cold starts.
👍, @mabead, thanks for creating this issue!
@rpopovych No, I don't have a workaround.
@mpuigdomenchSage There's no work planned on my side.
I've just come up against this. I'm having to create layers manually and then hard-code the ARN into my CI for the
Hmm. This is rather disappointing :(
This isn't a problem. This just means publishing produces 2 files that are required for lambda layers to work... |
Needs review. This was categorized with large effort based on T-shirt sizing. |
When we added layers support for .NET, the big advantage was the ability to prejit the assemblies that went into the layer. Since .NET 6 we have ReadyToRun, which does basically the same thing, except now for the entire codebase, not just the dependencies. In my mind the value of using layers for .NET assemblies seems low and hence this feature request seems low priority. Are there counter points I'm missing?
The deployment size. Because of the limits on deployment package size, layers are useful. One thing I did was use Puppeteer in .NET to take screenshots of a URL (to replace an old plugin for IIS that was abandoned and had a memory leak), and I had to use a layer to ship Chrome, both to reduce the package size and to avoid downloading Chrome when Puppeteer started.
The deployment size including layers is capped at 250 MB. There is a 50 MB limit on the zip if you upload it straight to Lambda, but if you upload to S3 the deployment zip can use the whole 250 MB. It seems easier to deal with uploading to S3 first than coordinating a new layer being created for every deployment. In your scenario, Chrome sounds like a layer that was created on a separate timeline from the deployment bundle.
Hmmm, it's been a few years since I ran into this issue, but I used to upload to S3 and attach that to the lambda and still ran into size issues... honestly I haven't used lambdas in about 2-3 years though :(
My current CI/CD looks like this (simplified): the build step (step 2 of my pipeline) runs the `dotnet lambda package` command, which generates a zip file in the TeamCity artifacts, and a later step (step 3) deploys all my AWS infrastructure. I have the feeling that this is a pretty standard CI/CD pipeline. I could easily map this to AWS CodeBuild and AWS CodePipeline.

I then have some problems figuring out how to integrate the `dotnet lambda publish-layer` and `dotnet lambda deploy-function` commands in this pipeline. Since both commands operate on my source code, the only option is to integrate them in step 2 (the build step). The problem is that this step is only there to build; it is not there to deploy. In fact, it does not have any access to AWS. Furthermore, all my AWS infrastructure is deployed in step 3. It therefore wouldn't make sense to deploy my lambda layers and lambdas in step 2.

To resolve this, I have the feeling that a new command like `dotnet lambda package-layer` would be more helpful. It would work in a way similar to `dotnet lambda package`, i.e. it would generate a local zip file for the layer instead of publishing it to AWS. Then, the `--function-layers` parameter of `dotnet lambda package` could accept this local zip file as an input (i.e. the command would not only accept layer ARNs). I feel that this would be a more natural way to integrate lambda layers in a CI/CD pipeline.

If you look at the lambda layers integration in terraform, you will see that only nodejs layers are supported. Also, you will see that the `aws_lambda_layer_version` resource takes a zip file as input (`filename`). So, if a command like `dotnet lambda package-layer` could create a zip file, it could be integrated naturally in a CI/CD pipeline that uses terraform.

So, any clarification on how to use lambda layers in a CI/CD pipeline would be appreciated.
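To make the proposal concrete, the two commands might be invoked like this. The `package-layer` command and its options are hypothetical (loosely mirroring the existing `publish-layer` and `package` commands), and passing a local zip to `--function-layers` is the suggested new behavior, not something the tool supports today:

```shell
# Hypothetical command: create the layer zip locally, without touching AWS.
dotnet lambda package-layer \
    --layer-type runtime-package-store \
    --package-manifest ./layer.csproj \
    --output-package ./layer.zip

# Suggested new behavior: let the existing package command accept the
# local layer zip instead of an already-published layer ARN.
dotnet lambda package \
    --function-layers ./layer.zip \
    --output-package ./service.zip
```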