Quick pro tips for Bitbucket Pipelines

Tingli Tan
May 8, 2020


A quick five-minute discussion of Bitbucket Pipelines tips and pitfalls

I work for an AWS premium partner company, and our customers use all types of CI/CD pipelines. One of the best solutions is Bitbucket Pipelines. It’s quite powerful and easy to use compared to AWS’s own offerings like CodeBuild and CodePipeline.

I’ve found that the quickest way to learn something is to look at an example first, then understand the top-level concepts and basic usage before diving into the details.

Here is one example of the Bitbucket pipeline file, bitbucket-pipelines.yml.
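A minimal sketch of what such a file can look like (the Docker images, build commands, region, cluster name, and resource path below are illustrative placeholders; see the pipe’s documentation for its full variable list):

image: atlassian/default-image:2           # default Docker image for all steps

pipelines:
  default:
    - step:
        name: "Build and push"
        image: node:12                      # this step overrides the default image
        script:
          - npm ci
          - npm run build
    - step:
        name: "Deploy to PROD"
        deployment: production              # ties this step to the Production environment
        script:
          - pipe: atlassian/aws-eks-kubectl-run:1.1.1
            variables:
              AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
              AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
              AWS_DEFAULT_REGION: "us-east-1"        # placeholder region
              CLUSTER_NAME: "my-eks-cluster"         # placeholder cluster name
              KUBECTL_COMMAND: "apply"
              RESOURCE_PATH: "k8s/deployment.yaml"   # placeholder manifest path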

Some quick tips here:

  • You can define a default Docker image at the top.
  • For each step, you can define a different Docker image to run that step.
  • Each line under “script” is a shell command. Bitbucket prints out the line and examines its return value; if the command returns a non-zero value, the pipeline stops and fails.
  • You can combine multiple commands into a single script entry, either by chaining them with && or by using a YAML block scalar (“- |”), as shown below.
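A minimal sketch of both forms (the directory and npm commands are placeholders):

script:
  # chain commands with && so the whole entry fails if any command fails
  - cd my-sub-dir && npm ci && npm run build
  # or write a small multi-line shell script with a YAML block scalar
  - |
    cd my-sub-dir
    npm ci
    npm run build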

If you have sharp eyes, you might be wondering where variables like ${AWS_ACCESS_KEY_ID} and ${AWS_SECRET_ACCESS_KEY} come from.

Bitbucket has a feature called deployment environments.

You can access it from the repository settings, under Pipelines → Deployments.

By default, Bitbucket gives you three environments (Test, Staging and Production). You can change the names or even remove some of them.

In one of my projects, for example, I changed them to dev and common.

Then you can define environment variables for each environment. If you mark a variable as ‘Secured’, it will be encrypted and won’t be displayed in your pipeline logs.

Now that we have deployment environments, we can add them to each step.

The second step in the previous example shows how to use a deployment environment:

- step:
    name: "Deploy to PROD"
    deployment: production

Pro Tips:

  • The deployment statement has to be under a step. If you put it at the pipeline level, like the following, it won’t work:
deployment: production
- step:
    name: "Build and push"
    script: xxx
- step:
    name: "Deploy to PROD"
  • You won’t be able to print out ‘Secured’ environment variables. For example, if you echo $AWS_ACCESS_KEY_ID in your script section, it will print an empty line. The variable still works fine; you just can’t see its value.
  • If you are using shell scripts in your steps, all the deployment variables are set in your script sections automatically.
  • If you use a “pipe” in your steps, you need to pass the environment variables explicitly, like below (snippet from the example above):
- pipe: atlassian/aws-eks-kubectl-run:1.1.1
  variables:
    AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
    AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}

Let’s talk about another useful feature called artifacts.

Imagine you have a pipeline with two steps. The first one uses a Node.js Docker image to build a Node app and generates files in the ./dist folder. The second needs an AWS CLI container to send those files to an AWS S3 bucket.

The question is: how can I pass the files in the ./dist folder from step 1 to step 2, when the two steps use completely different Docker images/containers?

The solution is to use artifacts. You can define the artifacts like below so Bitbucket will “keep” them between steps.
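A minimal sketch of a build step that declares its output as artifacts (the npm commands and the ./dist folder are assumptions about the project layout):

- step:
    name: "Build"
    image: node:12
    script:
      - npm ci
      - npm run build            # writes the build output to ./dist
    artifacts:
      - dist/**                  # keep everything under dist for later steps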

Pro Tips:

  • The artifact paths are relative to the repo’s root directory, not to the directory the script is working in.
    For example, if your script is like this:
script:
  - cd my-sub-dir
  - npm pack
artifacts:
  - my-app-*.tgz

You will not find the files npm pack generated, simply because the path has to start from the repo’s root directory, not the ‘current directory’.

So the fix is simply to change the path in the artifacts section:

script:
  - cd my-sub-dir
  - npm pack
artifacts:
  - my-sub-dir/my-app-*.tgz

Putting it all together, here is the whole pipeline.
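A minimal sketch of the two-step build-and-upload pipeline (the image tags, build commands, and the S3_BUCKET deployment variable are illustrative assumptions):

image: node:12                           # default image, used by the build step

pipelines:
  default:
    - step:
        name: "Build"
        script:
          - npm ci
          - npm run build                # writes the build output to ./dist
        artifacts:
          - dist/**                      # hand the build output to the next step
    - step:
        name: "Deploy to S3"
        image: atlassian/pipelines-awscli   # a different image just for this step
        deployment: production
        script:
          # AWS credentials come from the deployment environment variables;
          # S3_BUCKET is a deployment variable you define yourself
          - aws s3 sync ./dist "s3://${S3_BUCKET}"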

You can find more AWS-related samples here.

I hope this saves you some time and effort.
Thanks for reading!

Written by Tingli Tan

Principal Technology Architect at TELUS
