Sunday, November 6, 2016

Customise Linux banner and prompt on CentOS 7 using Ansible

Modifying the involved files

Hi everyone, in this post we will explain how to modify the motd and PS1 to customise our Linux terminal. We are using CentOS 7.

sorea@vmjump01 ~$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

Modifying motd

First we install figlet with the command below:

sorea@vmjump01 ~$ sudo yum install figlet

After the installation we execute the next command and get output like the one shown below:

sorea@vmjump01 ~$ figlet all IT - Blog

[figlet output: "all IT - Blog" rendered as a large ASCII-art banner]

Once we have our banner we can copy it and modify the following file as we like:

sorea@vmjump01 ~$ cat /etc/motd

e.g. Mine looks like this =D

[screenshot: customised motd banner and coloured prompt]

Modifying .bashrc

To get a prompt like the one shown in the above picture (green, white, and red, with the way it shortens the directories we are working in) we need to modify the .bashrc file located in our home directory. Since we are customising our own environment, it will always be at ~/.bashrc

sorea@vmjump01 ~$ ls -la .bashrc
-rw-r--r--. 1 sorea sorea 840 Nov  6 06:51 .bashrc

Let's add the block of lines below to our file:

# Font colors
red=$(tput setaf 1)
green=$(tput setaf 2)
yellow=$(tput setaf 3)
blue=$(tput setaf 4)
purple=$(tput setaf 5)
cyan=$(tput setaf 6)
gray=$(tput setaf 7)
redStrong=$(tput setaf 8)
white=$(tput setaf 9)
# Background colors
blueB=$(tput setab 4)
grayB=$(tput setab 7)
# Reset colors and attributes
sc=$(tput sgr0)
PS1='\[$green\]\u\[$white\]@\[$red\]\h \[$white\]$(IFS=/ d=($PWD); c=${#d[@]}; if [[ $PWD == $HOME ]]; then echo "~"; elif [ $c -gt 5 ]; then echo "/${d[1]}/${d[2]}/.../${d[$c-2]}/${d[$c-1]}"; else echo "$PWD"; fi)\[$white\]\$ \[$white\]'

Then we execute the command below to reload our environment variables:

sorea@vmjump01 ~$ source .bashrc
sorea@vmjump01 ~$
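The directory-shortening logic embedded in PS1 can be tried on its own before putting it in the prompt. Below is a sketch that wraps the same idea in a standalone bash function; the name short_pwd and the sample paths are ours, purely for illustration:

```shell
#!/bin/bash
# Same idea as the $( ... ) block inside PS1: split the path on "/",
# and if it has more than 5 components show only the first two and
# last two directories with "..." in between.
short_pwd() {
    local p=$1
    local d c
    IFS=/ read -r -a d <<< "$p"   # d[0] is empty because of the leading slash
    c=${#d[@]}
    if [[ $p == "$HOME" ]]; then
        echo "~"
    elif (( c > 5 )); then
        echo "/${d[1]}/${d[2]}/.../${d[c-2]}/${d[c-1]}"
    else
        echo "$p"
    fi
}

short_pwd /home/sorea/projects/ansible/customise_cli/files
# /home/sorea/.../customise_cli/files
short_pwd /etc/ansible
# /etc/ansible
```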

All the steps described above were for a single host; if we want to apply the same configuration to a number of hosts we can do it using ansible.

Configure using ansible


The steps described before are needed to complete this activity, since the playbook copies the motd file to the remote servers. Now let's create a tree as follows:

sorea@vmjump01 /home/sorea/.../ansible/customise_cli$ tree
.
├── customise.yml
├── files
│   └── motd
└── hosts
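The playbook itself is not reproduced in full in this post, so here is a minimal sketch of what customise.yml could contain; the module arguments and the hosts value are assumptions, adapt them to your inventory:

```yaml
---
- name: Customise banner and prompt
  hosts: all
  become: yes
  tasks:
    - name: Copy our banner to the remote motd
      copy:
        src: files/motd
        dest: /etc/motd

    - name: Add the colour variables and PS1 to the user's .bashrc
      blockinfile:
        path: "~/.bashrc"
        block: |
          # Font colors
          red=$(tput setaf 1)
          green=$(tput setaf 2)
          # ... remaining colour variables and the PS1 line go here
      become: no
```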

In our playbook we had to change part of the last line, because {# opens a Jinja comment tag and the template fails with an error about an unclosed comment:

From
c=${#d[@]};
to
c={{ '${#d[@]}' }};
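An alternative to quoting the expression is Jinja's {% raw %} tag, which passes everything inside it through untouched. Both fragments below are sketches of the same escape, not lines taken from the original playbook:

```yaml
# Option 1: hand the literal text to Jinja as a quoted string (used in this post)
fragment_a: "c={{ '${#d[@]}' }};"

# Option 2: disable Jinja processing for the span entirely
fragment_b: "c={% raw %}${#d[@]}{% endraw %};"
```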

sorea@vmjump01 /home/sorea/.../ansible/customise_cli$ tail -1 customise.yml
       PS1='\[$green\]\u\[$white\]@\[$red\]\h \[$white\]$(IFS=/ d=($PWD); c={{ '${#d[@]}' }}; if [[ $PWD == $HOME ]]; then echo "~"; elif [ $c -gt 5 ]; then echo "/${d[1]}/${d[2]}/.../${d[$c-2]}/${d[$c-1]}"; else echo "$PWD"; fi)\[$white\]\$ \[$white\]'

Once all files are in place we can run our playbook by executing the next command to apply the changes:

sorea@vmjump01 /home/sorea/.../ansible/customise_cli$ ansible-playbook customise.yml -i hosts -k
SSH password:

We will be prompted for a password and will then see the progress on the screen. Watch this video for execution details:

https://youtu.be/bztt1a0BEy8


The playbooks used are already on Git:

Friday, November 4, 2016

SSH authentication using password, public key, and sshpass

Every time we want or need to get into a Linux server we have to authenticate. It could be with a simple password, by configuring a public key, or by using sshpass; below we describe the process for each one.

First of all, note that all the servers used here are Virtual Machines (VMs), listed below. Access will be through SSH (Secure SHell), and all of them use the SSH default port (22).

# virsh list
 Id    Name                           State
----------------------------------------------------
 2     vmjump01                       running
 3     vmdbmongodb01                  running
 4     vmdbmongodb02                  running
 5     vmdbmysql01                    running
 6     vmdbmysql02                    running
 7     vmdbpostgresql01               running
 8     vmdbpostgresql02               running
 9     vmdhcp01                       running
 10    vmdns01                        running
 11    vmdns02                        running
 12    vmemail01                      running
 13    vmemail02                      running

Using a Password

To start this post we log in to one VM using PuTTY from Windows; we will then use this VM (vmjump01) as a jump server to reach the rest of the VMs.

In the first red circle we specify our user and the VM's name (user@host); in the second one we use the SSH default port and leave the SSH radio button checked (because we are using this protocol). Finally, click Open.

[screenshot: PuTTY configuration with host name, port 22, and SSH selected]

After clicking Open we will be prompted for a password; we type it and hit Enter.

[screenshot: PuTTY terminal prompting for the password]

To jump from one Linux server to another we only need to ssh to the server as follows:
* Note: In this case we are using the default port; if another one is configured we need to add the parameter -p port_number (e.g. ssh vmdbmysql01 -p 6372)
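If a host always listens on a non-default port, an entry in ~/.ssh/config saves typing -p on every connection. The sketch below uses the example port 6372 from the note above:

```
Host vmdbmysql01
    Port 6372
    User sorea
```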

[sorea@vmjump01 ~]$ ssh vmdbmysql01
The authenticity of host 'vmdbmysql01 (192.168.0.35)' can't be established.
ECDSA key fingerprint is 29:b1:de:00:78:e1:80:08:c7:cb:90:fc:d1:1b:66:2d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'vmdbmysql01,192.168.0.35' (ECDSA) to the list of known hosts.
sorea@vmdbmysql01's password:
[sorea@vmdbmysql01 ~]$

As this is the first time we are trying to log in from vmjump01, we are asked whether we want to continue. We typed yes, so a new record with vmdbmysql01's information is added to the known_hosts file, as shown below:

[sorea@vmjump01 ~]$ cat .ssh/known_hosts | wc -l
1

If we try once again we will be prompted for the password immediately, since the server's information was already added and it is a known host now.

[sorea@vmjump01 ~]$ ssh vmdbmysql01
sorea@vmdbmysql01's password:
[sorea@vmdbmysql01 ~]$ uptime
13:55:03 up  1:40,  1 user,  load average: 0.00, 0.01, 0.05

Using a Public Key

In this step we will describe the process to create an RSA key of 2048 bits (2048 is the default value for RSA).

Configure key:

After running the ssh-keygen command with its parameters, the system asks us for a path and a passphrase. In this case we just hit Enter so the system takes the defaults; we also leave the passphrase empty, since we will be running commands remotely and do not want to be prompted for the passphrase each time we need to get into a server.

[sorea@vmjump01 ~]$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/home/sorea/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sorea/.ssh/id_rsa.
Your public key has been saved in /home/sorea/.ssh/id_rsa.pub.
The key fingerprint is:
57:68:b4:8e:0f:57:68:81:a9:8e:0d:46:a1:b4:81:a9 sorea@vmjump01
The key's randomart image is:
+--[ RSA 2048]----+
|    ..   oo      |
| . ..   o. + ... |
|. E.      + .    |
|o+ +o .  . .     |
|*  .o=  S .      |
|.. ...o  .       |
|  . o    .....S  |
|   .             |
|                 |
+-----------------+

If we set a passphrase and later want to remove or change it, we only need to execute the command below and follow the instructions on the screen:
ssh-keygen -p

Move public key to target server:

If we try to get into a server we will be prompted for a password, so we need to copy our public key to the target server:

[sorea@vmjump01 ~]$ ssh vmdbmongodb01
sorea@vmdbmongodb01's password:

There is no problem if we just need to provide the password for one server, but what if we had 20, 200, or even 2000 (so weird ¬¬)?

[sorea@vmjump01 ~]$ for s in {01..20}; do echo vmdbmongodb$s $(ssh vmdbmongodb$s "uptime"); done
sorea@vmdbmongodb01's password:
vmdbmongodb01 12:34:48 up 20 min, 0 users, load average: 0.00, 0.04, 0.12
sorea@vmdbmongodb02's password:
...
...

To copy the public key we just run the ssh-copy-id command followed by the target server and type the password, as shown below:

[sorea@vmjump01 ~]$ ssh-copy-id vmdbmongodb01
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
sorea@vmdbmongodb01's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'vmdbmongodb01'"
and check to make sure that only the key(s) you wanted were added.
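Under the hood, ssh-copy-id just appends the public key to ~/.ssh/authorized_keys on the target and keeps the permissions strict. The sketch below simulates that in a local temp directory so it can be run anywhere; the key string is a placeholder, not a real key:

```shell
#!/bin/sh
# Simulate, in a local temp dir, what ssh-copy-id performs on the target host.
target=$(mktemp -d)                        # stands in for the remote $HOME
pubkey='ssh-rsa AAAAB3...placeholder== sorea@vmjump01'

mkdir -p "$target/.ssh"
chmod 700 "$target/.ssh"                   # sshd rejects keys if perms are too open
printf '%s\n' "$pubkey" >> "$target/.ssh/authorized_keys"
chmod 600 "$target/.ssh/authorized_keys"

stat -c '%a' "$target/.ssh" "$target/.ssh/authorized_keys"
# 700
# 600
```

On a real host the same effect is achieved by ssh-copy-id over an existing password login.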

Then let's try to ssh to the target server; we will see that a password is not required anymore for this server:


[sorea@vmjump01 ~]$ ssh vmdbmongodb01
[sorea@vmdbmongodb01 ~]$

So now we have no problem running commands remotely, since authentication happens through the key:

[sorea@vmjump01 ~]$ for s in {01..20}; do echo vmdbmongodb$s $(ssh vmdbmongodb$s "uptime"); done
vmdbmongodb01 12:35:40 up 20 min, 0 users, load average: 0.00, 0.03, 0.12
vmdbmongodb02 12:35:41 up 20 min, 0 users, load average: 0.00, 0.03, 0.12
...
...

Using sshpass


To use this option we need to add one line to .bashrc, located in our home directory: export SSHPASS=mypasswordhere. The file should look like:

[sorea@vmjump01 ~]$ vi .bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
export SSHPASS=mypasswordhere

Then we need to reload the variables in .bashrc by running source .bashrc:

[sorea@vmjump01 ~]$ source .bashrc  

Let's do some tests
[sorea@vmjump01 ~]$ sshpass -e ssh vmdbpostgresql01
[sorea@vmjump01 ~]$ 

As explained before, the last command does not do anything, since vmdbpostgresql01 is not a known host. To avoid this we need to add an option to the ssh command (-o StrictHostKeyChecking=no):

[sorea@vmjump01 ~]$ sshpass -e ssh -o StrictHostKeyChecking=no vmdbpostgresql01
Warning: Permanently added 'vmdbpostgresql01,192.168.0.37' (ECDSA) to the list of known hosts.
[sorea@vmdbpostgresql01~]$

Now, to get into any reachable server (a local user is needed on the target host) we just need to specify the target server and that's it: we have access without being prompted for a password.

To use a different variable than SSHPASS we only need to change export SSHPASS=mypasswordhere to export myPass=mypasswordhere in the .bashrc file and reload the variables in our shell environment with source .bashrc

[sorea@vmjump01 ~]$ sshpass -p $myPass ssh -o StrictHostKeyChecking=no vmdbpostgresql01
[sorea@vmdbpostgresql01 ~]$

Notice that the SSHPASS variable name is the sshpass default (case sensitive). If we want to use the default one we execute
sshpass -e .......
otherwise
sshpass -p $myPass

sshpass can be used with other commands too, e.g.:
sshpass -e command file server:/path/file

sshpass -e scp file.bkp vmdbpostgresql01:/tmp/file.bkp 
or
sshpass -p $myPass scp file.bkp vmdbpostgresql01:/tmp/file.bkp


Example running commands remotely using sshpass

[sorea@vmjump01 ~]$ for s in $(cat inventory); do echo $s $(sshpass -e ssh -o StrictHostKeyChecking=no $s "uptime"); done
vmdhcp01 15:05:29 up 2:38, 0 users, load average: 0.00, 0.01, 0.05
vmdns01 15:05:29 up 2:38, 0 users, load average: 0.00, 0.01, 0.05
vmdns02 15:05:30 up 2:38, 0 users, load average: 0.07, 0.03, 0.05
vmemail01 15:05:30 up 2:38, 0 users, load average: 0.07, 0.03, 0.05
vmemail02 15:05:29 up 2:38, 0 users, load average: 0.00, 0.01, 0.05
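A slightly more robust variant of the loop above reads the inventory file line by line with while read, which avoids surprises from word splitting. The echo below stands in for the real sshpass -e ssh ... "$s" uptime call so the sketch runs anywhere:

```shell
#!/bin/sh
# Hypothetical inventory file, one hostname per line.
inv=$(mktemp)
cat > "$inv" <<'EOF'
vmdhcp01
vmdns01
vmdns02
EOF

# Replace the echo with:
#   sshpass -e ssh -o StrictHostKeyChecking=no "$s" uptime
while read -r s; do
    echo "would run uptime on $s"
done < "$inv"

rm -f "$inv"
```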


Note: All the steps described above were using CentOS 7

[sorea@vmjump01 ~]$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

Wednesday, September 21, 2016

Ansible inventories, Ad-Hoc commands and first playbook

Hi folks,

Welcome to our first post. In this one we will be talking about ansible: installation, inventories, ad-hoc commands, and a basic playbook. Everything will be technical (less theory). Inventories are the basis for running ansible playbooks and ad-hoc commands.

Installation

We will be using CentOS 7 during this post.

$cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

On CentOS, ansible comes from the EPEL repository (Extra Packages for Enterprise Linux); if it is not already configured, install it by executing the command below:

 $sudo yum -y install epel-release

Then list all repositories with yum to confirm it was installed:

$yum repolist

To make sure, let's execute the next command and verify that ansible is available in our repositories:

$yum list | grep ansible
ansible.noarch                       2.1.1.0-1.el7          epel
ansible-inventory-grapher.noarch     1.0.1-2.el7            epel
ansible-lint.noarch                  3.1.3-1.el7            epel
ansible-openstack-modules.noarch     0-20179d751a.el7       epel
ansible1.9.noarch                    1.9.6-2.el7            epel
kubernetes-ansible.noarch            0.6.0-0.1.ebd5.el7     epel

In this case we will be installing the latest version (2.1) by executing:

$yum -y install ansible

 ** Remember: to install packages we need to have enough privileges.

To confirm it was successfully installed execute:

$ansible --version
  ansible 2.1.0.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides

Ansible inventories

Imagine we have the Virtual Machines below (from now on, VMs):

vmdbmongodb01
vmdbmongodb02
vmdbmysql01
vmdbmysql02
vmdbpostgresql01
vmdbpostgresql02
vmdev01
vmdhcp01
vmemail01
vmemail02
vmnagios01
vmnagios02
vmweb01
vmweb02
vmweb03
vmweb04
vmweb05

Basic ansible inventory structure
We will create a basic inventory using the above list, with a title between square brackets [].
Our first inventory is complete :D

$cat hosts
[my_servers]
vmdbmongodb01
vmdbmongodb02
vmdbmysql01
vmdbmysql02
vmdbpostgresql01
vmdbpostgresql02
vmdev01
vmdhcp01
vmemail01
vmemail02
vmnagios01
vmnagios02
vmweb01
vmweb02
vmweb03
vmweb04
vmweb05

Inventory divided by function
As we all know, every server has a different role, so we need to apply different configurations, ACLs, and/or install different applications depending on that role. As shown below, servers are grouped by role:

$cat hosts
[mongo_servers]
vmdbmongodb01
vmdbmongodb02

[mysql_servers]
vmdbmysql01
vmdbmysql02

[postgres_servers]
vmdbpostgresql01
vmdbpostgresql02

[dev_servers]
vmdev01

[dhcp_servers]
vmdhcp01

[mail_servers]
vmemail01
vmemail02

[nagios_servers]
vmnagios01
vmnagios02

[web_servers]
vmweb01
vmweb02
vmweb03
vmweb04
vmweb05

Tips and tricks
Ansible is a powerful configuration management tool and supports numeric ranges in hostname patterns; let's use them to improve the above inventory.

$cat hosts
[mongo_servers]
vmdbmongodb[01:02]

[mysql_servers]
vmdbmysql[01:02]

[postgres_servers]
vmdbpostgresql[01:02]

[dev_servers]
vmdev01

[dhcp_servers]
vmdhcp01

[mail_servers]
vmemail[01:02]

[nagios_servers]
vmnagios[01:02]

[web_servers]
vmweb[01:05]

Note this inventory works just as well as the one above. When we have thousands of servers it will save thousands of lines. E.g. what if we have vmserver01 through vmserver150? We can reduce it as shown below (we save 149 lines, which makes the inventory easier to read):

[my_thousand_servers]
vmserver[01:150]

Also, if the vmserver hosts are under a domain, we can use:

[my_thousand_servers]
vmserver[01:150].mydomain.com
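To double-check what a pattern expands to, ansible my_thousand_servers -i hosts --list-hosts prints the resulting host list. As a plain-shell illustration of the same numbering, seq can generate an equivalent list; the %02g format pads 1 to 9 with a leading zero, matching the two-digit start of vmserver[01:150]:

```shell
#!/bin/sh
# Generate the host names the pattern vmserver[01:150] describes.
seq -f 'vmserver%02g' 1 150 | sed -n '1p;2p;150p'
# vmserver01
# vmserver02
# vmserver150
```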


Ad-Hoc commands

This is the first ad-hoc command we are executing:

ansible mongo_servers -m ping --ask-pass

It is divided as follows:
ansible: the command to execute (obviously)
mongo_servers: the inventory group we want to target (from the ansible default location; to use another inventory just add -i /example/path/my_inventory)
-m ping: -m stands for module, and ping is the module we are invoking
--ask-pass: this option will prompt us for our password (the one we use to get into the servers)

[screenshot: output of the ad-hoc ping command]

If the command is executed as below, it will trigger errors because it does not have the right credentials to get into those servers (the ansible ping module is not the same as the Linux ping command!!!)

ansible mongo_servers -m ping

[screenshot: ansible ping failing without valid credentials]

After creating my key and copying it to those servers, it works fine:

ssh-keygen -t rsa
ssh-copy-id vmdbmongodb01
ssh-copy-id vmdbmongodb02

[screenshot: ansible ping succeeding after copying the key]

We can execute it in many ways, for example using:
-u user to execute the command as (sorea)
--private-key path where the selected user's key is located

In the underlined part, sorea's keys are taken by default.

Note: this last part is being executed with a different user (root)



First playbook

This playbook installs the mongodb package using the ansible yum module; to achieve this we used the page below as a reference:

As a pre-step we needed to create a new repo using the ansible yum_repository module, since all the mongodb packages are located there:

The playbooks below can be found at:
https://github.com/soreaort/all-it/tree/master/ansible/mongdb_installation

$ cat create_mongodb_repo.yml
---
- name: Create MongoDB repo mongodb-org-3.2
  hosts: mongo_servers
  become: yes
  become_user: root
  tasks:
    - name: Creating repo
      yum_repository:
       name: MongoDB-mongodb-org-3.2
       description: mongodb-org-3.2
       baseurl: https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.2/x86_64/
       gpgcheck: yes
       enabled: yes
       gpgkey: https://www.mongodb.org/static/pgp/server-3.2.asc

And it was executed as follows (by default using our keys):

$ ansible-playbook create_mongodb_repo.yml

The main playbook installs the mongodb-org package from the mongodb repository (already created):

$ cat install_mongodb.yml
---
- name: Install and enable MongoDB
  hosts: mongo_servers
  become: yes
  become_user: root
  tasks:
    - name: Installing MongoDB
      yum: name=mongodb-org state=present enablerepo=MongoDB-mongodb-org-3.2
    - name: Starting and enabling MongoDB
      service: name=mongod state=started enabled=yes

Executed as:
$ ansible-playbook install_mongodb.yml
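The same two tasks can also be written with dictionary-style module arguments instead of the key=value form; this sketch is functionally equivalent, just the layout many style guides prefer:

```yaml
tasks:
  - name: Installing MongoDB
    yum:
      name: mongodb-org
      state: present
      enablerepo: MongoDB-mongodb-org-3.2

  - name: Starting and enabling MongoDB
    service:
      name: mongod
      state: started
      enabled: yes
```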

[screenshot: ansible-playbook run output]

The final step is validating that everything is working fine on the target servers:

[sorea@vmdbmongodb02 ~]$ service mongod status