This is intended to be an ongoing checklist of tasks (with implementation notes) to perform when creating a new Linux instance on AWS
- Choose OS (I am trying to use Ubuntu for instances that don’t specifically require RHEL)
- Create the instance with desired root volume size. RHEL seems to create a volume with the specified size, but then allocates a 6GB root partition.
- Choose appropriate VPC, subnet (considerations: public, private, AZ, what RDS or other instances need to communicate with it)
- Select the appropriate Key (will create additional accounts and assign keys as needed)
- Select or configure initial security group rules
- Start instance
- Resize root partition if necessary (currently only necessary for RHEL 7 instances)
- Stop the instance
- If you have done any configuration already, take a snapshot of the volume to protect your work (only if restoring the snapshot would be worthwhile should something go wrong)
- Note the mount point and volume name of the root volume
- detach the volume and attach it to another instance (preferably running the same OS – spin a temporary one up if needed). You do not need (nor want) to mount it.
- See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage_expand_partition.html#prepare-linux-root-part (read this to verify the next steps still hold – and to see example output)
- make sure that the filesystem is OK BEFORE you resize it (the URL above skips this step): sudo xfs_repair /dev/xvdf1 (for XFS, used by RHEL 7) OR, for ext4: sudo e2fsck -f /dev/xvdf1 (or whatever device you attached the volume as)
- parted /dev/xvdf (or whatever device you attached it as)
- (parted) unit s
- (parted) print
- (parted) rm 1
- (parted) mkpart primary 2048s 100% OR (for gpt partitions) (parted) mkpart Linux 4096s 100%
- (parted) print
- (parted) set 1 boot on
- (parted) quit
- xfs_repair /dev/xvdf1 (for xfs used by RHEL 7) OR for ext4: e2fsck -f /dev/xvdf1 (or whatever device you attached the volume as)
- detach the volume and reattach it at its original device name on the original instance
- start the instance
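The detach/attach shuffle above can also be scripted with the AWS CLI. This is only a sketch – vol-xxxx, i-original, i-helper, and the device names are placeholders you must replace with your own values (noted when you recorded the mount point and volume name earlier):

```shell
# Placeholders: vol-xxxx (root volume), i-helper (temp instance), i-original
# Detach the root volume from the original (stopped) instance
aws ec2 detach-volume --volume-id vol-xxxx
aws ec2 wait volume-available --volume-ids vol-xxxx

# Attach it to the helper instance as a secondary device for repair/resize
aws ec2 attach-volume --volume-id vol-xxxx --instance-id i-helper --device /dev/sdf

# ...run the xfs_repair/e2fsck and parted steps on the helper, then reverse:
aws ec2 detach-volume --volume-id vol-xxxx
aws ec2 wait volume-available --volume-ids vol-xxxx
aws ec2 attach-volume --volume-id vol-xxxx --instance-id i-original --device /dev/sda1
```

The `wait volume-available` calls just block until the detach has actually completed, so the attach doesn’t race it.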
- Allocate desired swap space as a swap file
- This example will assume an 8 GB swap file (/var/swap.1)
- sudo /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=8000
- sudo chmod 600 /var/swap.1
- sudo /sbin/mkswap /var/swap.1
- sudo /sbin/swapon /var/swap.1
- To enable it by default after reboot, add this line to /etc/fstab: /var/swap.1 swap swap defaults 0 0
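After enabling it, a quick sanity check that the swap file is actually active (not in the original notes, but standard commands on both RHEL 7 and Ubuntu):

```shell
swapon -s   # summary of active swap; /var/swap.1 should be listed as type "file"
free -m     # the "Swap:" line should reflect the new capacity
```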
- Configure hostname
- hostnamectl set-hostname <hostname.FQDN>
- add the hostname and FQDN to /etc/hosts
- modify /etc/cloud/cloud.cfg and comment out the set_hostname, update_hostname, and update_etc_hosts lines
- Set timezone
- (RHEL 7) timedatectl set-timezone America/Chicago
- ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime (verify that timedatectl did this for RHEL 7 – if not, do it!)
- Create additional accounts
- groups ubuntu (or ec2-user)
- useradd -s /bin/bash -m -d /home/newuser -G adm,cdrom,floppy,sudo,audio,dip,video,plugdev,netdev newuser
- edit /etc/sudoers (use visudo) and either add: newuser ALL=(ALL) NOPASSWD:ALL OR change the group line to: %sudo ALL=(ALL:ALL) NOPASSWD:ALL (putting NOPASSWD: into the existing %sudo line works for the ubuntu user, but for some reason a second user still needs its own NOPASSWD: entry)
- mkdir ~newuser/.ssh
- vi ~newuser/.ssh/authorized_keys
- add in the contents of the newuser.pub file from my repository
- chmod 700 ~newuser/.ssh
- chmod 600 ~newuser/.ssh/authorized_keys
- chown -R newuser:newuser ~newuser/.ssh
- test logging into the account from another machine and verify that “sudo bash” works.
- Allow passwords for SSH (ONLY if required/desired)
- edit /etc/ssh/sshd_config and set PasswordAuthentication yes
- restart sshd (e.g. sudo systemctl restart sshd on RHEL 7, sudo service ssh restart on Ubuntu)
- Create and mount additional volumes
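The notes don’t spell this step out, so here is a typical sequence – assuming the new volume shows up as /dev/xvdg and you want it mounted at /data (both of those are assumptions; substitute your own device and mount point):

```shell
# Assumes /dev/xvdg is the new, EMPTY volume and /data the desired mount point
sudo mkfs -t xfs /dev/xvdg      # or: sudo mkfs -t ext4 /dev/xvdg on Ubuntu
sudo mkdir -p /data
sudo mount /dev/xvdg /data

# Use the UUID in /etc/fstab so the entry survives device-name changes:
sudo blkid /dev/xvdg            # note the UUID, then add a line like:
# UUID=<uuid-from-blkid>  /data  xfs  defaults,nofail  0  2
```

The nofail option keeps the instance bootable even if the volume is missing at boot, which matters on EC2 where volumes can be detached.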
- Configure automatic backups
- set the Autosnapshot tag to True for all volumes that you want backed up automatically
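This can be done in the console, or with the AWS CLI. vol-xxxx is a placeholder, and the exact tag key (Autosnapshot here, per the note above) depends on whatever snapshot automation you are running:

```shell
# Tag a volume so the snapshot automation picks it up (tag key is site-specific)
aws ec2 create-tags --resources vol-xxxx --tags Key=Autosnapshot,Value=True
```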
- Install additional software
- Do initial software update
- Ubuntu: apt-get update && apt-get dist-upgrade
- RHEL: yum update
- Configure automatic updates
- RHEL: yum install yum-cron
- for RHEL < 7, edit /etc/sysconfig/yum-cron
- /etc/init.d/yum-cron start
- chkconfig yum-cron on
- For RHEL >= 7, edit /etc/yum/yum-cron.conf
- if you only want security updates => update_cmd = security
- apply_updates = yes (also download_updates)
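Putting those settings together, the relevant part of /etc/yum/yum-cron.conf looks like this (fragment only – other defaults left as shipped):

```
[commands]
update_cmd = security      # only if you want security-only updates
download_updates = yes
apply_updates = yes
```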
- Ubuntu:
- apt-get install unattended-upgrades
- dpkg-reconfigure -plow unattended-upgrades
- verify /etc/apt/apt.conf.d/20auto-upgrades
- verify /etc/apt/apt.conf.d/50unattended-upgrades (verify reboot time and that automatic reboot is true)
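For reference, a /etc/apt/apt.conf.d/20auto-upgrades that enables both the list refresh and the upgrade run looks like this (these two lines are what dpkg-reconfigure writes):

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```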
- Configure and test additional security rules
- Create a restore AMI! (Since this creates a snapshot, I don’t think it will take up much extra space for instances that we plan to back up anyway.) This is a REALLY good idea, since restoring from a snapshot requires you to first create a new instance based off the same AMI as the original and then replace or attach the volume on that instance – and we can’t guarantee that the original AMI will still be available.
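Creating the restore AMI is one CLI command. i-xxxx and the name are placeholders; --no-reboot avoids downtime but risks filesystem inconsistency in the snapshot, so let it reboot if you can afford it:

```shell
# Placeholders: i-xxxx (instance), name/description are examples only
aws ec2 create-image --instance-id i-xxxx \
    --name "myhost-restore-baseline" \
    --description "Restore point after initial configuration" \
    --no-reboot
```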