AWS Linux EC2 Instance Creation Checklist

This is intended to be an ongoing checklist of tasks (with implementation notes) to perform when creating a new Linux instance on AWS.

  1. Choose the OS (I am trying to use Ubuntu for instances that don’t specifically require RHEL)
  2. Create the instance with the desired root volume size.  RHEL seems to create a volume of the specified size but then allocates only a 6 GB root partition.
  3. Choose the appropriate VPC and subnet (considerations: public vs. private, AZ, and which RDS or other instances need to communicate with it)
  4. Select the appropriate key pair (additional accounts will be created and assigned keys as needed)
  5. Select or configure the initial security group rules
  6. Start instance
  7. Resize root partition if necessary (currently only necessary for RHEL 7 instances)
    1. Stop the instance
    2. If you have done any configuration already, take a snapshot of the volume to protect your work (only if restoring the snapshot would be worthwhile should something go wrong)
    3. Note the device name (attachment point) and volume ID of the root volume
    4. Detach the volume and attach it to another instance (preferably running the same OS – spin up a temporary one if needed).  You do not need (nor want) to mount it.
    5. See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage_expand_partition.html#prepare-linux-root-part  (read this to verify the next steps still hold – and to see example output)
    6. Make sure that the filesystem is OK BEFORE you resize it (the URL above skips this step): sudo xfs_repair /dev/xvdf1 (for XFS, used by RHEL 7) OR, for ext4: sudo e2fsck -f /dev/xvdf1 (or whatever device you attached the volume as)
    7. parted /dev/xvdf (or whatever device you attached it as)
    8. (parted) unit s
    9. (parted) print
    10. (parted) rm 1
    11. (parted) mkpart primary 2048s 100% OR (for GPT partitions) (parted) mkpart Linux 4096s 100%
    12. (parted) print
    13. (parted) set 1 boot on
    14. (parted) quit
    15. sudo xfs_repair /dev/xvdf1 (for XFS, used by RHEL 7) OR, for ext4: sudo e2fsck -f /dev/xvdf1 (or whatever device you attached the volume as)
    16. Detach the volume from the helper instance and reattach it at the original device name on the new instance
    17. Start the instance (a consolidated, non-interactive sketch of this resize procedure follows this list)
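    A condensed sketch of the resize steps above, run from the helper instance.  It assumes the volume is attached as /dev/xvdf with an MBR partition table and an XFS filesystem (RHEL 7); substitute your actual device, and e2fsck/resize2fs for ext4:
      sudo xfs_repair /dev/xvdf1                          # check the filesystem BEFORE resizing
      sudo parted --script /dev/xvdf unit s print         # note the start sector of partition 1
      sudo parted --script /dev/xvdf rm 1
      sudo parted --script /dev/xvdf mkpart primary 2048s 100%
      sudo parted --script /dev/xvdf set 1 boot on
      sudo xfs_repair /dev/xvdf1                          # re-check before reattaching
      # After reattaching and booting the new instance, grow the filesystem into
      # the enlarged partition if df -h still shows the old size:
      sudo xfs_growfs /               # XFS; for ext4: sudo resize2fs /dev/xvda1 (device name is an assumption)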
  8. Allocate desired swap space as a swap file
    1. This example will assume an 8 GB swap file (/var/swap.1)
    2. sudo /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=8000
    3. sudo chmod 600 /var/swap.1
    4. sudo /sbin/mkswap /var/swap.1
    5. sudo /sbin/swapon /var/swap.1
    6. To enable it by default after reboot, add this line to /etc/fstab: /var/swap.1 swap swap defaults 0 0
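    To verify that the swap file above is active (a quick sanity check):
      sudo swapon -s      # should list /var/swap.1
      free -h             # Swap total should show roughly 7.8G for the 8000 MB file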
  9. Configure hostname
    1. hostnamectl set-hostname <hostname.FQDN>
    2. add the hostname and FQDN to /etc/hosts
    3. modify /etc/cloud/cloud.cfg and comment out the set_hostname, update_hostname, and update_etc_hosts lines
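    For example (web01.example.com and the 10.0.0.5 private IP are placeholders):
      sudo hostnamectl set-hostname web01.example.com
      # /etc/hosts entry:
      #   10.0.0.5   web01.example.com   web01
      # lines to comment out in /etc/cloud/cloud.cfg:
      #   # - set_hostname
      #   # - update_hostname
      #   # - update_etc_hosts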
  10. Set timezone
    1. (RHEL 7)  timedatectl set-timezone America/Chicago
    2. ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime    (verify that timedatectl did this for RHEL 7 – if not, do it!)
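    To verify on either OS:
      timedatectl | grep "Time zone"    # should show America/Chicago
      ls -l /etc/localtime              # should point at /usr/share/zoneinfo/America/Chicago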
  11. Create additional accounts
    1. groups ubuntu (or groups ec2-user on RHEL) to check the default user’s group memberships
    2. useradd -s /bin/bash -m -d /home/newuser -G adm,cdrom,floppy,sudo,audio,dip,video,plugdev,netdev newuser
    3. edit /etc/sudoers:
      newuser ALL=(ALL) NOPASSWD:ALL  OR change the existing line to %sudo ALL=(ALL:ALL) NOPASSWD:ALL  (i.e. put NOPASSWD: into the existing %sudo line – for some reason passwordless sudo works for the default ubuntu user but not for a second user without spec’ing this)
    4. mkdir ~newuser/.ssh
    5. vi ~newuser/.ssh/authorized_keys
    6. add in the contents of the newuser.pub file from my repository
    7. chmod 700 ~newuser/.ssh
    8. chmod 600 ~newuser/.ssh/authorized_keys
    9. chown -R newuser:newuser ~newuser/.ssh
    10. test logging into the account from another machine and verify that “sudo bash” works (a condensed sketch of these steps follows)
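    A condensed sketch of steps 2–9 above (newuser is a placeholder; on RHEL, use the wheel group instead of sudo and trim the Ubuntu-specific groups):
      sudo useradd -s /bin/bash -m -d /home/newuser -G adm,sudo newuser
      sudo visudo                                     # safer than editing /etc/sudoers directly
      sudo mkdir /home/newuser/.ssh
      sudo vi /home/newuser/.ssh/authorized_keys      # paste in the contents of newuser.pub
      sudo chmod 700 /home/newuser/.ssh
      sudo chmod 600 /home/newuser/.ssh/authorized_keys
      sudo chown -R newuser:newuser /home/newuser/.ssh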
  12. Allow passwords for SSH (ONLY if required/desired)
    1. edit /etc/ssh/sshd_config and set PasswordAuthentication yes
    2. restart sshd
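    A sketch, assuming the stock sshd_config contains an explicit PasswordAuthentication no line:
      sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
      sudo systemctl restart sshd     # RHEL 7
      sudo service ssh restart        # Ubuntu 14.04 (upstart); systemctl restart ssh on systemd Ubuntu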
  13. Create and mount additional volumes
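    For example (the /dev/xvdg device, /data mount point, and XFS are assumptions; use mkfs -t ext4 if preferred):
      sudo mkfs -t xfs /dev/xvdg
      sudo mkdir /data
      sudo mount /dev/xvdg /data
      sudo blkid /dev/xvdg            # get the UUID for a stable fstab entry
      # then add to /etc/fstab (the UUID is a placeholder):
      #   UUID=<uuid-from-blkid>  /data  xfs  defaults,nofail  0  2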
  14. Configure automatic backups
    1. set the Autosnapshot tag to True for all volumes that you want backed up automatically
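    The tag can be set in the console or with the AWS CLI (the volume ID is a placeholder):
      aws ec2 create-tags --resources vol-0123456789abcdef0 --tags Key=Autosnapshot,Value=True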
  15. Install additional software
  16. Do initial software update
    1. Ubuntu: apt-get update && apt-get dist-upgrade
    2. RHEL: yum update
  17. Configure automatic updates
    1. RHEL: yum install yum-cron
      1. for RHEL < 7, edit /etc/sysconfig/yum-cron
      2. /etc/init.d/yum-cron start
      3. chkconfig yum-cron on
      4. For RHEL >= 7, edit /etc/yum/yum-cron.conf
      5. if you only want security updates => update_cmd = security
      6. apply_updates = yes (and also set download_updates = yes)
    2. Ubuntu:
      1. apt-get install unattended-upgrades
      2. dpkg-reconfigure -plow unattended-upgrades
      3. verify /etc/apt/apt.conf.d/20auto-upgrades
      4. verify /etc/apt/apt.conf.d/50unattended-upgrades (verify reboot time and that automatic reboot is true)
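    For reference, the relevant settings in each file (these are the stock option names; the values shown are examples):
      # /etc/yum/yum-cron.conf (RHEL 7)
      update_cmd = security
      download_updates = yes
      apply_updates = yes

      // /etc/apt/apt.conf.d/20auto-upgrades (Ubuntu)
      APT::Periodic::Update-Package-Lists "1";
      APT::Periodic::Unattended-Upgrade "1";

      // /etc/apt/apt.conf.d/50unattended-upgrades (Ubuntu)
      Unattended-Upgrade::Automatic-Reboot "true";
      Unattended-Upgrade::Automatic-Reboot-Time "02:00";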
  18. Configure and test additional security rules
  19. Create a restore AMI!  (Since this creates a snapshot, I don’t think this will take up a lot of extra space for instances that we plan to back up anyway.)  This is a REALLY good idea, since restoring a snapshot relies on you to first create a new instance based off the same AMI as the original and then replace or attach the volume on that instance.  We can’t guarantee that the original AMI will still be available.
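    This can be done from the console (Actions > Image > Create Image) or with the AWS CLI (the instance ID and image name are placeholders):
      aws ec2 create-image --instance-id i-0123456789abcdef0 --name "myserver-restore" --description "restore AMI"
    Note that create-image reboots the instance by default so the snapshot is filesystem-consistent; --no-reboot skips the reboot at the cost of that guarantee.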
