Native AWS Mental Model
As a new application owner in Native AWS, what am I getting into?
You, as a service owner, need to handle your own infrastructure. When we say infrastructure, we mean anything you create in your own AWS accounts (this includes, but is not limited to: Lambda functions, ECS clusters, EC2 instances, VPCs, IAM roles, Auto Scaling groups, CloudWatch Log Groups, and subnets). You’ll own it forever and ever. That means we need tooling to automate it. In the most general case, CloudFormation is our tool of choice, and we can use higher-level tools like the CDK to generate that CloudFormation.
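To make "infrastructure as code" concrete, here is a minimal CloudFormation template that declares a single CloudWatch Log Group. This is an illustrative sketch, not part of the BONES sample applications; the logical ID, log group name, and retention value are made up, while the resource type and properties follow the public CloudFormation spec:

```yaml
# Illustrative only: a minimal CloudFormation template.
# Deploying this stack creates (and thereafter manages) one Log Group.
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example stack that owns a single CloudWatch Log Group.
Resources:
  ServiceLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /my-service/application   # hypothetical name
      RetentionInDays: 30
```

The point is that the resource lives in a stack: updating or deleting the stack updates or deletes the Log Group, which is what lets tooling (rather than console clicks) own the resource's full lifecycle.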
Your investment in automation will be super high. If you haven’t already, you’ll be adopting CDK or LPT to automate internal things and glue them to external (AWS) resources.
At the end of the day, we’re aiming for you to be able to deploy your service to a new region with as few button clicks as possible, while keeping our expected level of operational excellence, security, scalability, and availability. We want deploying a new service or expanding to new regions to be delightful, safe, and easy.
What is included in the BONES CLI sample applications?
These applications are made up of:

- Octane templates: We vend packages to you through Octane that constitute a simple, functional application of whatever type you have selected.
- Infrastructure as code: Our goal is zero console clicks. We want you to get the full “here’s how you can do this all through code” approach. We’re there for Isengard, and super close for Conduit.

Nothing forces you to use the BONES sample applications; we’re just sitting atop Pipelines, BATS, BARS, and a few other services, and everything is managed in code. It’s just all pre-wired and set up. The CLI creates some packages and simplifies wiring up the Pipelines/BATS/BARS integration. That’s it. Nothing more.
What are all the systems that these sample applications tie together?
Let’s break down the pieces in play:

- BONES CLI — A CLI that asks you a few questions and generates a sample application as a starting point. It uses all the tools below to get a working application as quickly as possible.
- BrazilBuildSystem — Manages your dependencies. We still have Config files, so nothing changes there. Brazil builds your CloudFormation stack using either the CDK or CfnBuild to produce the CloudFormation that will be deployed in your pipeline.
- Pipelines — Internal pipelines (as opposed to CodePipeline), our internal continuous delivery tool.
- Version Set — The simplest definition: a grab bag of dependencies you can build against. That is, if you say in your Config file that you need “Spring = 4.0.x”, then you’d better have that in your version set. Otherwise we can’t build it, because we don’t know where to get your dependencies from (don’t worry, perl 5.8 comes for free! /jokes). Version sets let us know exactly what version of a package we’re building against, and what source that version came from.
- Version Set Revision — Also known as a “VSR”. This is just your version set at a point in time. Every change to a version set (including builds) introduces a new VSR. We can roll back to a VSR if things go poorly.
- BATS (Build Artifact Transformation Service) — A service that transforms a VSR into a thing (usually a zip file) we can deploy to Native AWS. BATS uses something called a transform package (aka a YAML file) to figure out what it should transform your VSR into.
- BARS (Build Artifact Replication Service) — A service that replicates the artifacts (zip files) to the different S3 buckets/AWS accounts you own (basically, aws s3 cp as a service).
- CloudFormation — Infrastructure as code; given a YAML/JSON file, make AWS resources.
- Octane — A service that can create initial service setup. For our purposes, a way to make Brazil packages from templates. Under the hood of the BONES CLI, we use Octane to actually generate the packages.
- RDE (Rapid Dev Environment) — Allows you to test your ECS, Lambda, and CodeDeploy applications locally, before pushing anything to GitFarm or even into an AWS account.

How do all these systems interact with each other to deploy to Native AWS (a little deeper now)?

The BONES CLI sample applications start as Octane templates (you can see our Octane templates here). These are just files with .erb extensions that, given some parameters, will make Brazil packages customized with those parameters (you can build your own Octane templates!). Once you use Octane, you get a few local packages that are customized to your service. You then use Octane again to promote those packages (this makes them go live in Brazil; until you run promote, they are local only), and then we begin to create the Builder Tools and AWS resources needed for a functional application.

To allow your pipeline and other internal resources to interact with your account, there is an initial CloudFormation stack we call the bootstrap stack. You’ll see it in your AWS account in CloudFormation, named BONESBootstrap. This stack is super-special: it’s what links AWS to our internal tool chain. We do this by using IAM roles. An IAM role is one example of an IAM identifier. A role is special because it is a credential that can be used by other AWS accounts/services; roles are used extensively in AWS. You control who can use (assume) your role by adding a trust policy. The BONESBootstrap stack definition has a few roles in it. Each of those roles trusts an internal service (BATS/BARS/Pipelines) to do work for you. The roles also whitelist a “pipeline ID” that’s allowed to use them (this ensures that someone else can’t use their pipeline to deploy to your account). You will then use CDK or LPT to generate your pipeline.
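To show the general shape of such a trust policy, here is a sketch in standard IAM JSON. This is not the actual BONESBootstrap policy: the account ID is AWS’s documentation placeholder standing in for an internal service’s account, and the sts:ExternalId condition stands in for the pipeline-ID allow-listing described above (the real stack may use a different mechanism):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "my-pipeline-id" }
      }
    }
  ]
}
```

The Principal says which account may assume the role, and the Condition narrows that further, so a caller from the trusted service still can’t use the role on behalf of someone else’s pipeline.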
Once we have a fully hydrated pipeline, enter Brazil, version sets, BATS, and BARS. Your pipeline was generated with a list of packages for Pipelines to auto-build. In the sample applications, we have configured things such that once you turn on your pipeline, the AWS resources you specify in your application will be created via Pipelines orchestration.

Pipelines kicks off a build of your packages. During the build, Brazil invokes CDKBuild or CfnBuild; both of these, in the end, generate CloudFormation templates as artifacts to be deployed in your pipeline. Once the build completes, a new VSR is produced, and Pipelines passes the VSR ID to BATS. BATS takes that VSR and runs brazil-bootstrap against it, which just produces a directory with your package and all of its dependencies. BATS takes that directory and runs zip, or generates a Docker image (basically; this is a huge over-simplification). Finally, BATS places the zipped artifacts and any supporting artifacts into an S3 bucket created by the BONESBootstrap stack. In the case of ECS, a Docker image is then copied into the ECR repository that was generated during bootstrapping.

At this point, Pipelines invokes BARS (artifact replication)! BARS takes those BATS-produced files and Docker images and copies them to the account we’re about to deploy to in each of the stages. Finally, Pipelines takes the artifact and invokes CloudFormation, thus deploying your infrastructure. This is the same for Lambda, ECS, and EC2, but what BATS produces differs for each type.

And that, my friends, is how it all works. No magic, just a bunch of systems doing system things!
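The “bootstrap a directory, then zip it” step BATS performs can be sketched conceptually in a few lines of Python. This is not the real BATS implementation (the function name and layout are made up); it just shows the idea of turning a directory of a package plus its dependencies into a single deployable archive:

```python
# Conceptual sketch only: what "run zip over the bootstrapped
# directory" amounts to. Real BATS behavior is far more involved.
import zipfile
from pathlib import Path


def zip_artifact(build_dir: str, artifact_path: str) -> str:
    """Zip every file under build_dir into artifact_path,
    preserving paths relative to build_dir."""
    root = Path(build_dir)
    with zipfile.ZipFile(artifact_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(root.rglob("*")):
            if path.is_file():
                # Store with a relative, forward-slash archive name.
                zf.write(path, path.relative_to(root).as_posix())
    return artifact_path
```

The resulting zip mirrors the bootstrapped directory’s layout, which is what lets the deployment target unpack it and find the package and its dependencies in their expected relative locations.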
uid: 202007171605 tags: #amazon #literature