AWS Developer Tools Blog

Best Practices for Local File Parameters

If you have ever passed the contents of a file to a parameter of the AWS CLI, you most likely did so using the file:// notation. By setting a parameter’s value to a file’s path prefixed with file://, you can explicitly pass the contents of a local file as input to a command:

aws service command --parameter file://path_to_file

The value passed to --parameter is the contents of the file, read as text. This means that as the contents of the file are read, the file’s bytes are decoded using the system’s set encoding. Then as the request is serialized, the contents are encoded and sent over the wire to the service.

You may be wondering why the CLI does not just send the straight bytes of the file to the service without decoding and encoding the contents. The bytes of the file must be decoded and then encoded because your system’s encoding may differ from the encoding the service expects. Ultimately, the use of file:// grants you the convenience of using files written in your preferred encoding when using the CLI.
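To see why the decode/re-encode step matters, here is a small local sketch (no AWS call involved; the filenames are made up) showing that the same text occupies different bytes in different encodings:

```shell
# Write the same accented word in two encodings (hypothetical filenames).
printf 'café' > note-utf8.txt
iconv -f UTF-8 -t ISO-8859-1 note-utf8.txt > note-latin1.txt

# The byte counts differ: 'é' is 2 bytes in UTF-8 but 1 byte in Latin-1.
# This is why the CLI decodes with your system encoding before re-encoding
# for the service: the raw bytes alone are ambiguous without an encoding.
wc -c < note-utf8.txt     # 5
wc -c < note-latin1.txt   # 4
```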

In versions 1.6.3 and higher of the CLI, you have access to another way to pass the contents of a file to the CLI: fileb://. It works similarly to file://, but instead of reading the contents of the file as text, it reads them as binary:

aws service command --parameter fileb://path_to_file

When the file is read as binary, the file’s bytes are not decoded as they are read in. This allows you to pass binary files, which have no encoding, as input to a command.
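The distinction shows up as soon as you try to decode raw bytes as text. A minimal local illustration (no AWS involved; the file name is hypothetical):

```shell
# Write two bytes (0x89 0x8b) that are valid binary data but not valid UTF-8.
printf '\211\213' > sample.bin

# Attempting to read the file as UTF-8 text fails, much as file:// would;
# reading it as raw bytes, the way fileb:// does, always succeeds.
python3 -c "open('sample.bin', encoding='utf-8').read()" 2>&1 | tail -n 1
```

The last line printed is a UnicodeDecodeError, the same class of failure shown later in this post.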

In this post, I will go into detail about when to use file:// and when to use fileb://.

Use Cases Involving Text Files

Here are a couple of the more popular cases for using file:// to read a file as text.

Parameter value is a long text body

One of the most common use cases for file:// is when the input is a long text body. For example, if I had a shell script named myshellscript that I wanted to run when I launch an Amazon EC2 instance, I could pass the shell script in when I launch my instance from the CLI:

$ aws ec2 run-instances --image-id ami-b66ed3de \
    --instance-type m3.medium \
    --key-name mykey \
    --security-groups my-security-group \
    --user-data file://myshellscript

This command will take the contents of myshellscript and pass it to the instance as user data such that once the instance starts running, it will run my shell script. You can read more about the different ways to provide user data in the Amazon EC2 User Guide.
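For reference, myshellscript could be any script the instance should run at first boot. A trivial, hypothetical example:

```shell
#!/bin/bash
# Hypothetical myshellscript: drop a marker file so you can later verify
# that the user data actually ran after the instance booted.
echo "bootstrapped at $(date)" > /tmp/bootstrapped
```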

Parameter requires JSON input

Oftentimes parameters require a JSON structure as input, and sometimes this JSON structure can be large. For example, let’s look at launching an EC2 instance with an additional Amazon EBS volume attached using the CLI:

$ aws ec2 run-instances --image-id ami-b66ed3de \
    --instance-type m3.medium \
    --key-name mykey \
    --security-groups my-security-group \
    --block-device-mappings '[{"DeviceName":"/dev/sdf","Ebs":{"VolumeSize":20,"DeleteOnTermination":false,"VolumeType":"standard"}}]'

Notice that the --block-device-mappings parameter requires JSON input, which can be somewhat lengthy on the command line. So, it would be convenient if you could specify the JSON input in a format that is easier to read and edit, such as in the form of a text file:

[
  {
    "DeviceName": "/dev/sdf",
    "Ebs": {
      "VolumeSize": 20,
      "DeleteOnTermination": false,
      "VolumeType": "standard"
    }
  }
]

By writing the JSON to a text file, it becomes easier to determine if the JSON is formatted correctly, and you can work with it in your favorite text editor. If the JSON above is written to some local file named myinput.json, you can run the same command as before using the myinput.json file as input to the --block-device-mappings parameter:

$ aws ec2 run-instances --image-id ami-b66ed3de \
    --instance-type m3.medium \
    --key-name mykey \
    --security-groups my-security-group \
    --block-device-mappings file://myinput.json

This becomes especially useful if you plan to reuse myinput.json for future ec2 run-instances commands, since you will not have to retype the entire JSON input.
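One handy habit is to validate the JSON file locally before handing it to the CLI. A quick sketch (recreating myinput.json from above; the validation step uses Python's standard json.tool module):

```shell
# Recreate the block-device mapping file from the section above.
cat > myinput.json <<'EOF'
[
  {
    "DeviceName": "/dev/sdf",
    "Ebs": {
      "VolumeSize": 20,
      "DeleteOnTermination": false,
      "VolumeType": "standard"
    }
  }
]
EOF

# json.tool exits non-zero on malformed JSON, so this catches typos
# before the CLI (or the service) rejects the request.
python3 -m json.tool myinput.json > /dev/null && echo "valid JSON"
```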

Use Cases Involving Binary Files

In most cases, file:// will satisfy your use case for passing the contents of a file as input. However, there are some cases where fileb:// must be used so that the file’s contents are read as binary rather than as text. Here are a couple of examples.

AWS Key Management Service (KMS) decryption

KMS is an AWS service that makes it easy for you to create and control the encryption keys used to encrypt your data. You can read more about KMS in the AWS Key Management Service Developer Guide. One service that KMS provides is the ability to encrypt and decrypt data using your KMS keys. This is really useful if you want to encrypt arbitrary data such as a password or RSA key. Here is how you can use KMS to encrypt data using the CLI:

$ aws kms encrypt --key-id my-key-id --plaintext mypassword \
    --query CiphertextBlob --output text

CiAxWxaLB2LyTobc/ppFeNcSLW/abxdFuvBdD3IBtHBTYBKRAQEBAgB4MVsWiwdi8k6G3P6aRX
jXEi1v2m8XRbrwXQ9yAbRwU2AAAABoMGYGCSqGSIb3DQEHBqBZMFcCAQAwUgYJKoZIhvcNAQcBM
B4GCWCGSAFlAwQBLjARBAyE/taUnrxXzSqa1+8CARCAJSi8/E819toVhfxm2A+T9mFdOfnjGuJI
zGynaCB3FsPXnrwl7vQ=

This command uses the KMS key my-key-id to encrypt the data mypassword. However, so that the CLI can properly display the result, the encrypted output of this command is base64-encoded. By base64-decoding the output, you can store the encrypted data as a binary file:

$ aws kms encrypt --key-id my-key-id --plaintext mypassword \
    --query CiphertextBlob \
    --output text | base64 --decode > my-encrypted-password
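The base64 step itself can be checked locally without calling KMS at all; the data here is a stand-in for real ciphertext, and the file name is made up:

```shell
# Encode then decode locally to see the round trip that fileb:// relies on.
printf 'mypassword' | base64 > encoded.txt
base64 --decode < encoded.txt    # prints: mypassword
```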

Then if I want to decrypt the data in my file, I can use KMS to decrypt my encrypted binary:

$ echo "$(aws kms decrypt --ciphertext-blob fileb://my-encrypted-password \
    --query Plaintext --output text | base64 --decode)"
mypassword

Since the file is binary, I use fileb:// rather than file:// to read its contents. If I were to read the file as text via file://, the CLI would try to decode the binary file using my system’s encoding. However, since the binary file has no encoding, a decoding error would be thrown:

$ echo "$(aws kms decrypt --ciphertext-blob file://my-encrypted-password \
    --query Plaintext --output text | base64 --decode)"

'utf8' codec can't decode byte 0x8b in position 5: invalid start byte

EC2 User Data

Looking back at the EC2 user data example from the Parameter value is a long text body section, file:// was used to pass the shell script as text to --user-data. However, in some cases, the value passed to --user-data must be a binary file.

One limitation of passing user data when launching an EC2 instance is that the user data is limited to 16 KB. Fortunately, there is a way to help stay under this limit: if your instance uses the cloud-init package, you can gzip-compress your cloud-init directives, and cloud-init will decompress the user data for you when the instance is launched:

$ aws ec2 run-instances --image-id ami-b66ed3de \
    --instance-type m3.medium \
    --key-name mykey \
    --security-groups my-security-group \
    --user-data fileb://mycloudinit.gz

By gzip-compressing the file, the cloud-init directive becomes a binary file. Consequently, the gzip-compressed file must be passed to --user-data using fileb:// so that the contents of the file are read as binary.
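To produce such a file, you would write your cloud-init directives and gzip them. A hypothetical sketch (the directive contents and file names are illustrative):

```shell
# Hypothetical cloud-init directive; the package list is just an example.
cat > mycloudinit <<'EOF'
#cloud-config
packages:
  - httpd
EOF

# Compressing turns it into a binary file, hence the need for fileb://.
gzip mycloudinit                  # produces mycloudinit.gz
gzip -t mycloudinit.gz && echo "valid gzip"
```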

Conclusion

I hope that my examples and explanations helped you better understand the various use cases for file:// and fileb://. Here’s a quick way to remember which file parameter to use: when the content of the file is human-readable text, use file://; when the content is binary and not human-readable, use fileb://.

You can follow us on Twitter @AWSCLI and let us know what you’d like to read about next! If you have any questions about the CLI, please get in contact with us at the Amazon Web Services Discussion Forums. If you have any feature requests or run into any issues using the CLI, don’t be afraid to communicate with us via our GitHub repository.

Stay tuned for our next blog post, and have a Happy New Year!