
How to Setup Lighttpd Web server on Ubuntu 15.04 / CentOS 7


Lighttpd is an open source web server that is secure, fast, standards-compliant, and very flexible, and it is optimized for high-performance environments. It uses very little memory compared to other web servers, puts only a small load on the CPU, and is tuned for speed, which makes it popular for its efficiency. Its advanced feature set (FastCGI, CGI, Auth, output compression, URL rewriting and many more) makes lighttpd a perfect web server for machines that suffer from load problems.

Here are some simple steps showing how we can set up the Lighttpd web server on a machine running the Ubuntu 15.04 or CentOS 7 Linux distribution.

Installing Lighttpd

Installing using Package Manager

Here, we'll install Lighttpd using the package manager, as it's the easiest method. We can simply run the following commands under sudo in a terminal or console to install Lighttpd.

CentOS 7

As lighttpd is not available in the official repository of CentOS 7, we'll need to add the EPEL additional repository to our system. To do so, we'll need to run the following yum command.

# yum install epel-release

Then, we'll update our system and proceed with the installation of lighttpd.

# yum update

# yum install lighttpd

[Screenshot: installing Lighttpd on CentOS]

Ubuntu 15.04

Lighttpd is available in the official repository of Ubuntu 15.04, so we'll simply update our local repository index and then install lighttpd using the apt-get command.

# apt-get update

# apt-get install lighttpd

[Screenshot: installing Lighttpd on Ubuntu]

Installing from Source

If we want to install lighttpd from the latest version of the source code, i.e. 1.4.39, we'll need to compile the source code and install it on our system. First of all, we'll need to install the dependencies required to compile it.
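On Ubuntu, the usual build dependencies can be installed like this (the package names below are typical, so treat this as a hedged example):

# apt-get install build-essential libpcre3-dev zlib1g-dev libbz2-dev

And on CentOS 7 (again, a hedged example):

# yum groupinstall "Development Tools"

# yum install pcre-devel zlib-devel bzip2-devel

Then, we'll download the latest source tarball.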

# cd /tmp/
# wget http://download.lighttpd.net/lighttpd/releases-1.4.x/lighttpd-1.4.39.tar.gz

After it's downloaded, we'll need to extract the tarball by running the following.

# tar -zxvf lighttpd-1.4.39.tar.gz

Then, we'll compile it by running the following commands.

# cd lighttpd-1.4.39
# ./configure
# make

Note: In this tutorial, we are installing lighttpd with its standard configuration. If you want to go beyond the standard configuration and enable more features, like support for SSL, mod_rewrite or mod_redirect, you can pass the corresponding options to the configure script.
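For example, a hedged configure invocation that enables SSL support (assuming the OpenSSL development headers are installed) would be:

# ./configure --with-openssl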

Once it's compiled, we'll install it on our system.

# make install

Configuring Lighttpd

If we need to configure our lighttpd web server further to suit our requirements, we can make changes to the default configuration file, i.e. /etc/lighttpd/lighttpd.conf. As we'll go with the default configuration in this tutorial, we won't make any changes to it. If we have made changes and want to check the config file for errors, we'll need to run the following command.

# lighttpd -t -f /etc/lighttpd/lighttpd.conf

On CentOS 7

If we are running CentOS 7, we'll need to create a new directory for the webroot defined in lighttpd's default configuration, i.e. /srv/www/htdocs/.

# mkdir -p /srv/www/htdocs/

Then, we'll copy the default welcome page from the /var/www/lighttpd/ directory to the directory created above.

# cp -r /var/www/lighttpd/* /srv/www/htdocs/

Starting and Enabling Services

Now, we'll start our lighttpd web server by executing the following systemctl command.

# systemctl start lighttpd

Then, we'll enable it to start automatically on every system boot.

# systemctl enable lighttpd

Configuring the Firewall

To make our webpages or websites running on the lighttpd web server reachable over the internet or inside the same network, we'll need to allow port 80 through the firewall. CentOS 7 ships with firewalld as its default firewall solution, and the same firewall-cmd commands work on Ubuntu 15.04 if firewalld is installed there. To allow port 80, or the http service, we'll need to run the following commands.

# firewall-cmd --permanent --add-service=http

success

# firewall-cmd --reload

success
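If the Ubuntu machine uses ufw instead of firewalld (ufw is Ubuntu's usual firewall frontend), a hedged equivalent would be:

# ufw allow 80/tcp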

Accessing Web Server

As we have allowed port 80, which is the default port of lighttpd, we should be able to access lighttpd's welcome page. To do so, we'll need to point our browser to the IP address or domain of the machine running lighttpd, according to our configuration. In this tutorial, we'll point our browser to http://lighttpd.linoxide.com/ as we have pointed our sub-domain to the machine's IP address. On doing so, we'll see the following welcome page in our browser.

[Screenshot: Lighttpd welcome page]

Further, we can add our website's files to the webroot directory and remove lighttpd's default index file to make our static website live on the internet.

If we want to run a PHP application on our lighttpd web server, we'll need to follow the steps below.

Installing PHP5 Modules

Once our lighttpd is installed successfully, we'll need to install PHP and some PHP modules to run PHP5 scripts in our lighttpd web server.

On Ubuntu 15.04

# apt-get install php5 php5-cgi php5-fpm php5-mysql php5-curl php5-gd php5-intl php5-imagick php5-mcrypt php5-memcache php-pear

On CentOS 7

# yum install php php-cgi php-fpm php-mysql php-curl php-gd php-intl php-pecl-imagick php-mcrypt php-memcache php-pear lighttpd-fastcgi

Configuring Lighttpd with PHP

To make PHP work with the lighttpd web server, we'll need to follow the method below for the distribution we are running.

On CentOS 7

First of all, we'll need to edit our PHP configuration, i.e. /etc/php.ini, and uncomment the line cgi.fix_pathinfo=1 using a text editor.

# nano /etc/php.ini

After that's done, we'll need to change the ownership of the PHP-FPM process from apache to lighttpd. To do so, we'll need to open the configuration file /etc/php-fpm.d/www.conf using a text editor.

# nano /etc/php-fpm.d/www.conf

Then, we'll change the user and group settings in the file as follows.

user = lighttpd
group = lighttpd

Once done, we'll need to save the file and exit the text editor. Then, we'll need to include the fastcgi module from the /etc/lighttpd/modules.conf configuration file.

# nano /etc/lighttpd/modules.conf

Then, we'll need to uncomment the following line by removing # from it.

include "conf.d/fastcgi.conf"

At last, we'll need to configure our fastcgi configuration file using our favorite text editor.

# nano /etc/lighttpd/conf.d/fastcgi.conf

Then, we'll need to add the following lines at the end of the file.

fastcgi.server += ( ".php" =>
((
"host" => "127.0.0.1",
"port" => "9000",
"broken-scriptfilename" => "enable"
))
)

After that's done, we'll save the file and exit the text editor.
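We'll then need to start PHP-FPM (and enable it at boot) and restart lighttpd so the changes take effect; for example:

# systemctl start php-fpm

# systemctl enable php-fpm

# systemctl restart lighttpd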

On Ubuntu 15.04

To enable fastcgi with lighttpd web server, we'll simply need to execute the following commands.

# lighttpd-enable-mod fastcgi

Enabling fastcgi: ok
Run /etc/init.d/lighttpd force-reload to enable changes

# lighttpd-enable-mod fastcgi-php

Enabling fastcgi-php: ok
Run /etc/init.d/lighttpd force-reload to enable changes

Then, we'll reload our lighttpd by running the following command.

# systemctl force-reload lighttpd

Testing if PHP is working

In order to see whether PHP is working as expected, we'll need to create a new PHP file under the webroot of our lighttpd web server. Here, in this tutorial, the default webroot is /var/www/html on Ubuntu and /srv/www/htdocs on CentOS, so we'll create an info.php file under it using a text editor.

On Ubuntu 15.04

# nano /var/www/html/info.php

On CentOS 7

# nano /srv/www/htdocs/info.php

Then, we'll simply add the following line into the file.

<?php phpinfo(); ?>

Once done, we'll simply save the file and exit from the text editor.

Now, we'll point our web browser to the machine running lighttpd, using its IP address or domain name with the path to the info.php file, such as http://lighttpd.linoxide.com/info.php. If everything was done as described above, we will see our PHP information page as shown below.

[Screenshot: phpinfo page served by lighttpd]

Conclusion

Finally, we have successfully installed Lighttpd, a lightweight, fast and secure web server, on machines running the CentOS 7 and Ubuntu 15.04 Linux distributions. Once it's ready, we can upload our website files into the webroot, configure virtual hosts, enable SSL, connect to a database, run web apps and much more with our lighttpd web server. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)



How to Install Interworx on CentOS 7


InterWorx is a hosting control panel purely based on Linux. It relies on the RPM package system for distribution of InterWorx itself, as well as for handling the various software packages of a web hosting platform. Hence, an RPM-compatible Linux distribution is strongly recommended for installation.

In this article, I'm explaining the installation of this control panel on a CentOS 7 server, which is an RPM-compatible Linux distribution. Previously, installation on this OS wasn't supported, but I believe you guys will be happy to know that the new InterWorx release, version 5.1.5, is designed to support the latest RHEL/CentOS distribution.
InterWorx Control Panel runs on a variety of systems and hardware, including Virtual Private Server (VPS) systems on OpenVZ, Virtuozzo, Xen and VMware. It also supports CloudLinux platforms.

Minimum hardware requirements

  • Memory: at least 256 MB; 512 MB is recommended
  • CPU: Pentium III 866 CPU
  • Disk Space: 512 MB; 1 GB is recommended

Pre-requisites

  • A Linux server, or a supported VPS system as mentioned above, meeting at least the minimum hardware requirements.
  • A clean install of a supported RPM-compatible Linux distribution.
  • A valid and active InterWorx Control Panel license key.
  • UIDs 102 - 107 and GIDs 102 & 103 should be free to use.

Installation steps

1. Log in to the server as root.

2. Download and run the installer script

sh <((curl -sL interworx.com/inst.sh))

This installation script will prompt you before proceeding with each step. If you don't want to be prompted after each configuration step, you can run the downloader script as below:

sh <((curl -sL interworx.com/inst.sh)) -l

I ran the initial installer script to go through the installation procedure step by step.

You will receive this prompt on initial setup. You can verify your OS settings and press "enter" to proceed.

=-=-=-=-= Installing InterWorx-CP =-=-=-=-=-

This script will install InterWorx-CP on your system.
Please make sure that you have backed up any critical
data!

This script may be run multiple times without any
problems. This is helpful if you find an error
during installation, you can safely ctrl-c out of
this script, fix the error and re-start the script.

Details of this installation will be logged in iworx-install.log

TARGET : CentOS Linux release 7.0.1406 (Core)
PLATFORM : GNU/Linux
PROCESSOR : x86_64
RPM TARGET: rhe7x
RPM DISTRO: cos7x
RPM DIR : /usr/src/redhat/RPMS
SRPM DIR : /usr/src/redhat/SRPMS
SRPM HOST : updates.interworx.com
IWORX REPO: release

Press <enter> to begin the install...

Then it will proceed with the installation, removing the packages listed below if they already exist on the server.

InterWorx-CP needs to remove some packages that may conflict
The following packages will be removed (if they exist)
STATUS: - bind
STATUS: - redhat-config-bind
STATUS: - caching-nameserver
STATUS: - sendmail
STATUS: - postfix
STATUS: - exim
STATUS: - mutt
STATUS: - fetchmail
STATUS: - spamassassin
STATUS: - redhat-lsb
STATUS: - evolution
STATUS: - mod_python
STATUS: - mod_auth_mysql
STATUS: - mod_authz_ldap
STATUS: - mod_auth_pgsql
STATUS: - mod_auth_kerb
STATUS: - mod_perl
STATUS: - mdadm
STATUS: - dovecot
STATUS: - vsftpd
STATUS: - httpd-tools
Is this ok? (Y/n):

After proceeding with this prompt, it will complete the installation and you will get a message directing you to the control panel.

-=-=-=-=-= Installation Complete! Next Step: License Activation! =-=-=-=-=-

To activate your license, go to
http://your-ip:2080/nodeworx/ or
https://your-ip:2443/nodeworx/
Also, check out http://www.interworx.com for news, updates, and support!

-=-=-=-=-= THANK YOU FOR USING INTERWORX! =-=-=-=-=-

You can either activate the license via the CLI, or activate it via the browser by logging into the control panel.

Command Line license activation

You can log in to the server as the root user and run the script as below:

[root@server1 ~]# /home/interworx/bin/goiworx.pex

You will be prompted to provide your license key, which will look like "INTERWORX_XXXXXXXXXX". Enter these details to activate the license via the CLI. Make sure port 2443 is open in case your server is firewalled.
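With firewalld, for example, the panel ports could be opened like this (a hedged example; adapt it to whatever firewall you run):

[root@server1 ~]# firewall-cmd --permanent --add-port=2080/tcp
[root@server1 ~]# firewall-cmd --permanent --add-port=2443/tcp
[root@server1 ~]# firewall-cmd --reload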

License activation from Control Panel

You can log in to the control panel using the URL http://your-ip:2080/nodeworx/ or https://your-ip:2443/nodeworx/. You will then be directed to the login screen as below:

[Screenshot: InterWorx login page]

Enter the details to proceed. Now you're done with the activation.

Once the license is activated, it will automatically configure the settings for the panel. Wait until the progress bar completes the setup.

[Screenshot: InterWorx installation complete]

You need to agree to the license agreement and set the DNS servers to complete the panel setup.

[Screenshot: DNS setup]

Now you'll have a Server Manager Panel (Nodeworx) and a Website Manager Panel (Siteworx). This is how you can access it.

Nodeworx:

[Screenshot: NodeWorx panel]
You can manage your server from Nodeworx and manage your individual domain using Siteworx.

Siteworx:

Click the Siteworx icon next to the domain to access its siteworx panel.

[Screenshot: SiteWorx icon next to the domain]

[Screenshot: SiteWorx panel]

How to create a domain in Interworx

You can log in to Nodeworx and navigate through the options Siteworx >> Accounts >> Add Siteworx account to proceed.

[Screenshot: SiteWorx account creation]
You can modify its account settings anytime with the "Edit" option on the left-hand side of the created account.

Advantages of Interworx

  • Provides the best performance, with Apache 2.4.10, 5.5.44-MariaDB and PHP 5.4.16 installed as part of the primary installation
  • Provides spam filtering and virus filtering interfaces which can be managed from the panel
  • Provides SNI support
  • Provides high-availability load balancing at a fraction of the price

I hope you guys enjoyed reading this article. It is a very light control panel which can be installed within a few minutes, and it has a user-friendly interface for managing accounts efficiently. Since it is an RPM-based control panel, all software installations/upgrades are independent and can be carried out easily.

I appreciate your valuable comments on this :).

Thank you and have a good day!!


How to Perform Network Sniffing with Tshark


This time let's talk about Tshark, a powerful command-line network analyzer that comes with the well-known Wireshark. It works like tcpdump, but with powerful decoders and filters, capable of capturing information from different network layers or protocols and displaying it in different formats and layouts.

It is used to analyze network traffic in real time or to read pcap/pcapng files, digging into the details of connections and helping to identify network anomalies, problems or trends. This helps network and security professionals stay ahead of users and their needs, and prevent or solve problems and security threats before it is too late.

[Screenshot: sample Tshark session]

Table of contents

  • Intro
  • Filters
    • Capture filters
    • Display filters
  • Output
    • formatting
    • statistics

Intro

Why Tshark

Tshark works like tcpdump, ngrep and others; however, since it provides the protocol-decoding features of Wireshark, you will be much more comfortable reading its output, as it makes network analysis in the terminal more human.

Tshark is a terminal application capable of doing virtually anything you can do with Wireshark, but with no need for clicks or screens. This makes it great when you need to do some scripting, such as cron-scheduled captures, or sending the data to sed, awk, perl, mail, a database and so on.
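As a hedged illustration of such scripting (the interface, path and schedule here are assumptions), a cron entry that captures five minutes of traffic at the top of every hour could look like:

0 * * * * /usr/bin/tshark -i eth0 -a duration:300 -w /tmp/hourly.pcap

The -a duration:300 option tells Tshark to stop the capture automatically after 300 seconds.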

Tshark is a great fit for remote packet capture on devices such as gateways: you just need to log in via SSH and use it as you would on localhost.

Capture, read and write packets

For our first run of Tshark, try calling it with no parameters; this will start capturing packets on the default network interface.

tshark

There may be more than one interface on your machine, and you may need to specify which one you want to use. To get a list of available interfaces, use the -D option.

tshark -D

Once you find out which interface to use, call Tshark with the -i option and an interface name or number reported by the -D option.

tshark -i eth1

tshark -i 3

Now that you can capture packets over the network, you may want to save them for later inspection; this can be done with the -w option.

tshark -i wlan0 -w /tmp/traffic.pcap

To analyze the packets from the previously saved traffic.pcap file, use the -r option; this will read packets from the file instead of a network interface. Note also that you don't need superuser rights to read from files.

tshark -r /tmp/traffic.pcap

By default, name resolution is performed; you may use -n to disable it for better performance in some cases.

tshark -n

Filters

If you are on a busy network, you may get a screen like in the Matrix movies, with all kinds of information flowing too fast and almost impossible to read. To solve this problem, Tshark provides two types of filters that will let you see beyond the chaos.

Capture filters

You can use traditional pcap/BPF filters to select what to capture from your interface.

Search for packets related to the 192.168.1.100 host on port 80 or 53.

tshark -i 2 -f "host 192.168.1.100 and (dst port 53 or 80)"

Ignore packets on multicast and broadcast domains.

tshark -i eth3 -f "not broadcast and not multicast"

Display filters

Display filters are set with -Y and use the Wireshark display filter syntax.

To see all traffic to or from host 192.168.1.1:

tshark -i eth0 -Y "ip.addr==192.168.1.1"

Display HTTP requests on TCP port 8800

tshark -i eth0 -Y "tcp.port== 8800 and http.request"

Display all but ICMP and ARP packets

tshark -i eth0 -Y "not arp or icmp"

Formatting

Sometimes you need more or less information from the network packets to be displayed, and you may also need to specify how and where to show this information. The following options let you do exactly this.

Use -V to make Tshark verbose and display details about packets, from the frame number and protocol fields to packet data and flags.
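For example:

tshark -i eth0 -V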

The -O option is much like the -V option; however, it will show the details of a specific protocol only.

tshark -i eth2 -O  icmp

Use the -T option to output data in different formats; this can be very handy when you need a specific format for your analysis, such as when you are using Tshark to fill a database.

tshark -i wlan0 -O icmp -T fields -e frame.number -e data

If you choose fields for the -T option, you must set the -e option at least once; this tells Tshark which field of information to display, and you can use the option multiple times to display more fields.

tshark -r nmap_OS_scan_succesful -Y "tcp.ack" -T fields -e frame.number -e ip.src -e tcp.seq -e tcp.ack -e tcp.flags.str -e tcp.flags -e tcp.analysis.acks_frame

[Screenshot: Nmap standard scan]

To get a complete list of the possible fields to use with the -e flag, use the -G option as below.

tshark -G fields | less

Further formatting can be done with the -E flag: you can show/hide headers, set the quote character and more.

Look at the following command, which combines these options to produce a CSV file.

tshark -r captured.cap -T fields -e frame.number -e frame.encap_type -e frame.protocols -e frame.len -e ip.addr -E separator=, -E quote=d > outfile.csv

[Screenshot: Tshark output of selected fields in CSV format]

Statistics

Sometimes you may want an analytical report; in this case, use the -z option followed by one of the many report types available.

Report on SMB, DNS and HTTP statistics, plus the list of hosts.

tshark -i ens1 -z smb,srt -z dns,tree -z http,tree -z hosts

[Screenshot: Tshark SMB statistics]

List of IP conversations

tshark -r cap.pcap -z conv,ip

[Screenshot: IP conversations report]

To get a list of the available reports try the -z help option.

tshark -z help

Secure sockets

You can also analyze encrypted connections like SSL; the following example shows HTTP inside the secure socket layer.

[Screenshot: details of an HTTP connection over SSL]

Here we are displaying packets on TCP port 443, telling Tshark to be verbose with the HTTP protocol, desegment SSL records, look for the private key in the PEM-formatted server-x.key file and dump debug information into the debug-ssl.log file.

tshark -r encrypted-packets.pcap -Y "tcp.port == 443" -O http \
-o "ssl.desegment_ssl_records: TRUE" \
-o "ssl.desegment_ssl_application_data: TRUE" \
-o "ssl.keys_list: 127.0.0.1,443,http,server-x.key" \
-o "ssl.debug_file: debug-ssl.log"

Conclusion

This is how to start using Tshark to sniff out packets from your networks. It will help you figure out where to look when problems arise, debug your network services, inspect security and analyze your network's general health.


Thanks for reading!


How to Install Sysdig System Diagnosing Tool on Ubuntu 15 / CentOS 7


Hello and welcome to today's article on Sysdig, a Linux system exploration and troubleshooting tool with first-class support for containers. Sysdig captures system state and activity from a running Linux instance and then lets you save, filter and analyze the data. You can use this awesome tool as a replacement for many Linux troubleshooting commands like top, lsof, strace, iostat, ps, etc. It combines the benefits of utilities such as strace, tcpdump and lsof into one single application, packed with a set of scripts called chisels that make it easier to extract useful information and do troubleshooting.

In this article we'll show you the installation steps and the basic usage of sysdig to perform system monitoring and troubleshooting on the Linux CentOS 7 and Ubuntu 15 operating systems.

1) Installing Sysdig on Ubuntu 15:

Sysdig supports the latest versions of Debian, RHEL and container-based OSes; however, it is updated with new functionality all the time. We are going to install Sysdig using the 'apt' command, but first we need to set up the apt repository maintained by Draios by running the following 'curl' commands as the root user.

The commands below will trust the Draios GPG key and configure the apt repository.

# curl -s https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public | apt-key add -

# curl -s -o /etc/apt/sources.list.d/draios.list http://download.draios.com/stable/deb/draios.list

Now you need to update the package list by executing the following command.

# apt-get update

[Screenshot: setting up the Draios apt repository]

Once your system update is complete, you need to install the kernel headers package using the command shown below.

# apt-get -y install linux-headers-$(uname -r)

Now you can install sysdig on Ubuntu using the following command.

# apt-get -y install sysdig

[Screenshot: Sysdig installation on Ubuntu]

2) Installing Sysdig on CentOS 7

The installation process on CentOS 7 is similar to the one we performed for the Ubuntu server, but this time we set up a yum repository that uses its own key to verify the authenticity of the packages.

Let's run the following command, using the 'rpm' tool with the '--import' option, to manually add the Draios key to the RPM keyring.

# rpm --import https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public

After this, download the Draios repository definition and configure yum to use it on your CentOS 7 server.

# curl -s -o /etc/yum.repos.d/draios.repo http://download.draios.com/stable/rpm/draios.repo

Now you need to update the package list by executing the following command before starting the installation of the Sysdig package.

# yum update

[Screenshot: Draios yum repository]

The EPEL repository is needed in order to download the Dynamic Kernel Module Support (DKMS) package used by the sysdig tool, so run the following command to enable the EPEL repository.

# yum -y install epel-release

[Screenshot: enabling the EPEL repository]

Now install the kernel headers in order to build the sysdig-probe module, and then run the following commands to install the Sysdig package on the server.

# yum install kernel-devel-$(uname -r)

# yum install sysdig

[Screenshot: installing Sysdig on CentOS]

3) Using Sysdig

After a successful installation of the sysdig tool, we will show you some of its most useful examples for troubleshooting your system. The simplest and easiest way to use sysdig is to invoke it without any arguments, as shown below.

# sysdig

By default, sysdig prints the information for each captured event on a single line, in the format: event number, event time, event CPU number, name of the process (with its PID), event direction (> for enter, < for exit), event type and event arguments.

[Screenshot: default sysdig output]

The output is huge and mostly not very useful by itself, so you can write the output of sysdig to a file by using the '-w' flag and specifying a '.dump' file name, as shown in the command below.

# sysdig -w result.dump

Then run the following command with the '-r' parameter to read the output from the saved file.

# sysdig -r result.dump

Sysdig Filters

You can use filters to narrow the output of sysdig down to specific information. Run the following command to get a list of the available filter fields.

# sysdig -l

----------------------
Field Class: fd

fd.num          the unique number identifying the file descriptor.
fd.cport        for TCP/UDP FDs, the client port.
fd.rproto       for TCP/UDP FDs, the remote protocol.

----------------------
Field Class: process

proc.pid        the id of the process generating the event.
proc.name       the name (excluding the path) of the executable generating the event.
proc.args       the arguments passed on the command line when starting the process generating the event.
proc.env        the environment variables of the process generating the event.
proc.cmdline    full process command line, i.e. proc.name + proc.args.
proc.exeline    full process command line, with exe as first argument, i.e. proc.exe + proc.args.
proc.cwd        the current working directory of the event.
proc.duration   number of nanoseconds since the process started.
proc.fdlimit    maximum number of FDs the process can open.
proc.fdusage    the ratio between open FDs and maximum available FDs for the process.
...
thread.pfminor  number of minor page faults since thread start.
thread.ismain   'true' if the thread generating the event is the main one in the process.

So you can filter the results using its powerful filtering system. You can use the "proc.name" filter to capture all of the sysdig events for a specific process. Let's, for example, filter the events of the 'mysqld' process using the proc.name argument with the command below.

# sysdig -r result.dump proc.name=mysqld

140630 02:20:30.848284977 2 mysqld (2899) io_getevents
140632 02:20:30.848289674 2 mysqld (2899) > switch next=2894(mysqld) pgft_maj=0 pgft_min=1 vm_size=841372 vm_rss=85900 vm_swap=0
140633 02:20:30.848292784 2 mysqld (2894) io_getevents
140635 02:20:30.848297142 2 mysqld (2894) > switch next=2901(mysqld) pgft_maj=0 pgft_min=4 vm_size=841372 vm_rss=85900 vm_swap=0
140636 02:20:30.848300414 2 mysqld (2901) io_getevents
140638 02:20:30.848307954 2 mysqld (2901) > switch next=0 pgft_maj=0 pgft_min=1 vm_size=841372 vm_rss=85900 vm_swap=0
140640 02:20:30.849340499 1 mysqld (2900) io_getevents
140642 02:20:30.849348907 1 mysqld (2900) > switch next=2895(mysqld) pgft_maj=0 pgft_min=1 vm_size=841372 vm_rss=85900 vm_swap=0
140643 02:20:30.849357633 1 mysqld (2895) io_getevents
140645 02:20:30.849362258 1 mysqld (2895) > switch next=26329(tuned) pgft_maj=0 pgft_min=1 vm_size=841372 vm_rss=85900 vm_swap=0
140702 02:20:30.995763869 1 mysqld (2898) io_getevents
140704 02:20:30.995777232 1 mysqld (2898) > switch next=2893(mysqld) pgft_maj=0 pgft_min=1 vm_size=841372 vm_rss=85900 vm_swap=0
140705 02:20:30.995782563 1 mysqld (2893) io_getevents
140707 02:20:30.995795720 1 mysqld (2893) > switch next=0 pgft_maj=0 pgft_min=3 vm_size=841372 vm_rss=85900 vm_swap=0
140840 02:20:31.204456822 1 mysqld (2933) futex addr=7F1453334D50 op=129(FUTEX_PRIVATE_FLAG|FUTEX_WAKE) val=1
140842 02:20:31.204464336 1 mysqld (2933) futex addr=7F1453334D8C op=393(FUTEX_CLOCK_REALTIME|FUTEX_PRIVATE_FLAG|FUTEX_WAIT_BITSET) val=12395
140844 02:20:31.204569972 1 mysqld (2933) > switch next=3920 pgft_maj=0 pgft_min=1 vm_size=841372 vm_rss=85900 vm_swap=0
140875 02:20:31.348405663 2 mysqld (2897) io_getevents

To filter the live events of the 'sshd' process, you can use the following command with the proc.name argument.

# sysdig proc.name=sshd

[Screenshot: sysdig proc.name filter]

Network and System Diagnosing with Sysdig

To see the top processes in terms of network bandwidth usage, run the sysdig command below.

# sysdig -c topprocs_net

Bytes Process PID
--------------------------------------------------------------------------------
304B sshd 3194

To capture all processes that open a specific file, use the command below.

# sysdig fd.name=/var/log

[Screenshot: capturing processes opening a file]

In order to capture all processes that open files under a specific path, you can use the following command. Comparison operators such as contains, =, !=, <, <=, > and >= can be used with filters. You will see that filters can be used both when reading from a file and on the live event stream.

# sysdig fd.name contains /etc

[Screenshot: capturing all processes touching /etc]
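Filters can also be combined with boolean operators. A hedged example that captures only open events touching /etc would be:

# sysdig "evt.type=open and fd.name contains /etc"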

Using Chisels in Sysdig

Sysdig’s chisels are little scripts that analyze the sysdig event stream to perform useful actions. If you’ve used system tracing tools like dtrace, you’re probably familiar with running scripts that trace OS events. Chisels work well on live systems, but can also be used with trace files for offline analysis.

To get the list of available chisels, just type the following command to get a short description for each of the available chisels.

# sysdig -cl

[Screenshot: list of available Sysdig chisels]

To run one of the chisels, you use the '-c' flag. For instance, let's run the topfiles_bytes chisel as shown below.

# sysdig -c topfiles_bytes

[Screenshot: topfiles_bytes chisel output]

Or, if you want to see the top files in a specific folder, use the command below.

# sysdig -c topfiles_bytes "fd.name contains /root"

To see the top files accessed by a specific user, use the command below.

# sysdig -c topfiles_bytes "user.name=admin"

Conclusion

Thank you for reading this detailed article; I hope you will find sysdig helpful as your favorite system and network diagnostic tool. There are still many more features that you can explore using sysdig. Don't forget to share your findings with us and leave your valuable comments.


How to Install PrestaShop on CentOS 7


PrestaShop is a powerful, dynamic and fully-featured free eCommerce software package enriched with innovative tools. It is used by more than 250,000 people around the world to build their online stores at no cost, and it is used widely across the globe due to its simplicity and efficiency.

If you're planning to start an online webstore, then you're in the right place. In this article, I'm providing guidelines on how I installed PrestaShop on my CentOS 7 server to build my online store.

Pre-requisites

  • Disable SELinux
  • Install the LAMP stack
  • Create a Database/User
  • Confirm the installation of the PHP modules GD, Mcrypt, Mbstring and PDO MySQL

1. Disable SELinux

You need to edit the SELinux configuration file located at /etc/selinux/config.

Modify the SELINUX parameter to disabled and reboot the server.
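For example, one way to do this (assuming the parameter is currently set to enforcing):

[root@server1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@server1 ~]# reboot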

2. Install the LAMP stack

I've set a proper hostname for my server and started with the LAMP installation. Firstly, install Apache.

[root@server1 ~]# yum install httpd -y

This will install all the required Apache packages. Make sure it is enabled and running on the server.

[root@server1 ~]# systemctl enable httpd
ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'

[root@server1~]# systemctl status httpd.service
httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
Active: active (running) since Tue 2016-02-23 09:18:28 UTC; 2s ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 15550 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
Main PID: 15561 (httpd)
Status: "Processing requests..."
CGroup: /system.slice/httpd.service
├─15561 /usr/sbin/httpd -DFOREGROUND
├─15562 /usr/sbin/httpd -DFOREGROUND
├─15563 /usr/sbin/httpd -DFOREGROUND
├─15564 /usr/sbin/httpd -DFOREGROUND
├─15565 /usr/sbin/httpd -DFOREGROUND
└─15566 /usr/sbin/httpd -DFOREGROUND

Now create the vhost for the domain on which we're planning to install PrestaShop. I'm installing PrestaShop for my domain saheetha.com.
Here is my vhost for the domain. Make sure you create the document root and log folders, here /var/www/saheetha.com/public_html/ and /var/www/saheetha.com/logs/, before restarting Apache (see the commands after the vhost file below).

[root@server1 ~]# cat /etc/httpd/conf.d/vhost.conf
NameVirtualHost *:80
<VirtualHost 139.162.54.130:80>
ServerAdmin webmaster@saheetha.com
ServerName saheetha.com
ServerAlias www.saheetha.com
DocumentRoot /var/www/saheetha.com/public_html/
ErrorLog /var/www/saheetha.com/logs/error.log
CustomLog /var/www/saheetha.com/logs/access.log combined
</VirtualHost>
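As a hedged example with the paths above, the folders can be created and Apache restarted like this:

[root@server1 ~]# mkdir -p /var/www/saheetha.com/public_html /var/www/saheetha.com/logs
[root@server1 ~]# systemctl restart httpd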

Now install MySQL; I'm installing MySQL 5.5. Download the MySQL Community repository for your Linux distribution. I downloaded the latest MySQL repo and then installed MySQL 5.5 on my server. Please see the steps I followed to choose my required version.

[root@server1 ~]# wget http://dev.mysql.com/get/mysql57-community-release-el7-7.noarch.rpm
[root@server1 ~]# yum localinstall mysql57-community-release-el7-7.noarch.rpm
[root@server1 ~]# yum install -y yum-utils    # install the yum-utils package

[root@server1 ~]# yum repolist enabled | grep "mysql.*-community.*"    # check the enabled repos before installation
mysql-connectors-community/x86_64 MySQL Connectors Community 17
mysql-tools-community/x86_64 MySQL Tools Community 31
mysql57-community/x86_64 MySQL 5.7 Community Server 56

[root@server1 ~]# yum-config-manager --disable mysql57-community    # disable the MySQL 5.7 repo

Loaded plugins: fastestmirror
=========================================================== repo: mysql57-community ===========================================================
[mysql57-community]
async = True
bandwidth = 0
base_persistdir = /var/lib/yum/repos/x86_64/7
baseurl = http://repo.mysql.com/yum/mysql-5.7-community/el/7/x86_64/
cache = 0
cachedir = /var/cache/yum/x86_64/7/mysql57-community
check_config_file_age = True
cost = 1000
deltarpm_metadata_percentage = 100
deltarpm_percentage =
enabled = 0
enablegroups = True
exclude =
failovermethod = priority
gpgcadir = /var/lib/yum/repos/x86_64/7/mysql57-community/gpgcadir
gpgcakey =
gpgcheck = True
gpgdir = /var/lib/yum/repos/x86_64/7/mysql57-community/gpgdir
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
hdrdir = /var/cache/yum/x86_64/7/mysql57-community/headers
http_caching = all
includepkgs =
ip_resolve =
keepalive = True
keepcache = False
mddownloadpolicy = sqlite
mdpolicy = group:small
mediaid =
metadata_expire = 21600
metadata_expire_filter = read-only:present
metalink =
minrate = 0
mirrorlist =
mirrorlist_expire = 86400
name = MySQL 5.7 Community Server
old_base_cache_dir =
password =
persistdir = /var/lib/yum/repos/x86_64/7/mysql57-community
pkgdir = /var/cache/yum/x86_64/7/mysql57-community/packages
proxy = False
proxy_dict =
proxy_password =
proxy_username =
repo_gpgcheck = False
retries = 10
skip_if_unavailable = False
ssl_check_cert_permissions = True
sslcacert =
sslclientcert =
sslclientkey =
sslverify = True
throttle = 0
timeout = 30.0
ui_id = mysql57-community/x86_64
ui_repoid_vars = releasever,
basearch
username =

[root@server1 ~]# yum-config-manager --enable mysql55-community    # enable the MySQL 5.5 repo
Loaded plugins: fastestmirror
=========================================================== repo: mysql55-community ===========================================================
[mysql55-community]
async = True
bandwidth = 0
base_persistdir = /var/lib/yum/repos/x86_64/7
baseurl = http://repo.mysql.com/yum/mysql-5.5-community/el/7/x86_64/
cache = 0
cachedir = /var/cache/yum/x86_64/7/mysql55-community
check_config_file_age = True
cost = 1000
deltarpm_metadata_percentage = 100
deltarpm_percentage =
enabled = 1
enablegroups = True
exclude =
failovermethod = priority
gpgcadir = /var/lib/yum/repos/x86_64/7/mysql55-community/gpgcadir
gpgcakey =
gpgcheck = True
gpgdir = /var/lib/yum/repos/x86_64/7/mysql55-community/gpgdir
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
hdrdir = /var/cache/yum/x86_64/7/mysql55-community/headers
http_caching = all
includepkgs =
ip_resolve =
keepalive = True
keepcache = False
mddownloadpolicy = sqlite
mdpolicy = group:small
mediaid =
metadata_expire = 21600
metadata_expire_filter = read-only:present
metalink =
minrate = 0
mirrorlist =
mirrorlist_expire = 86400
name = MySQL 5.5 Community Server
old_base_cache_dir =
password =
persistdir = /var/lib/yum/repos/x86_64/7/mysql55-community
pkgdir = /var/cache/yum/x86_64/7/mysql55-community/packages
proxy = False
proxy_dict =
proxy_password =
proxy_username =
repo_gpgcheck = False
retries = 10
skip_if_unavailable = False
ssl_check_cert_permissions = True
sslcacert =
sslclientcert =
sslclientkey =
sslverify = True
throttle = 0
timeout = 30.0
ui_id = mysql55-community/x86_64
ui_repoid_vars = releasever,
basearch
username =

[root@localhost ~]# yum repolist enabled | grep "mysql.*-community.*"    # confirm the enabled MySQL repo versions
mysql-connectors-community/x86_64 MySQL Connectors Community 17
mysql-tools-community/x86_64 MySQL Tools Community 31
mysql55-community/x86_64 MySQL 5.5 Community Server 199

Now install MySQL 5.5 from the repo.

[root@server1~]# yum install mysql-community-server

After completing the installation, start the MySQL service and confirm its status.

[root@server1 ~]# service mysqld start
Redirecting to /bin/systemctl start mysqld.service
[root@server1 ~]#
[root@server1 ~]#
[root@server1 ~]# systemctl status mysqld.service
mysqld.service - MySQL Community Server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled)
Active: active (running) since Tue 2016-02-23 09:27:44 UTC; 8s ago
Process: 15717 ExecStartPost=/usr/bin/mysql-systemd-start post (code=exited, status=0/SUCCESS)
Process: 15664 ExecStartPre=/usr/bin/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Main PID: 15716 (mysqld_safe)
CGroup: /system.slice/mysqld.service
├─15716 /bin/sh /usr/bin/mysqld_safe
└─15862 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mysqld...

Feb 23 09:27:42 server1.centos7-test.com mysql-systemd-start[15664]: Alternatively you can run:
Feb 23 09:27:42 server1.centos7-test.com mysql-systemd-start[15664]: /usr/bin/mysql_secure_installation
Feb 23 09:27:42 server1.centos7-test.com mysql-systemd-start[15664]: which will also give you the option of removing the test
Feb 23 09:27:42 server1.centos7-test.com mysql-systemd-start[15664]: databases and anonymous user created by default. This is
Feb 23 09:27:42 server1.centos7-test.com mysql-systemd-start[15664]: strongly recommended for production servers.
Feb 23 09:27:42 server1.centos7-test.com mysql-systemd-start[15664]: See the manual for more instructions.
Feb 23 09:27:42 server1.centos7-test.com mysql-systemd-start[15664]: Please report any problems at http://bugs.mysql.com/
Feb 23 09:27:42 server1.centos7-test.com mysqld_safe[15716]: 160223 09:27:42 mysqld_safe Logging to '/var/log/mysqld.log'.
Feb 23 09:27:42 server1.centos7-test.com mysqld_safe[15716]: 160223 09:27:42 mysqld_safe Starting mysqld daemon with databases from /v.../mysql
Feb 23 09:27:44 server1.centos7-test.com systemd[1]: Started MySQL Community Server.
Hint: Some lines were ellipsized, use -l to show in full.

[root@server1 ~]# mysql --version
mysql Ver 14.14 Distrib 5.5.48, for Linux (x86_64) using readline 5.1

Now you can run the MySQL secure installation script to secure your MySQL installation: removing remote root login, setting the root password, disabling anonymous users, and so on, as needed.

[root@server1 ~]# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MySQL to secure it, we'll need the current
password for the root user. If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
... Success!

By default, MySQL comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] n
... skipping.

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!

Now it's time for PHP. Install PHP with all the required modules.

[root@server1 ~]# yum install php-mcrypt php php-common php-pdo php-cli php-mysql php-gd php-xml libtool-ltdl mhash mcrypt -y

[root@server1 ~]# php -v
PHP 5.4.16 (cli) (built: Jun 23 2015 21:17:27)
Copyright (c) 1997-2013 The PHP Group
Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies

3. Create a Database/User

Now create a database for the PrestaShop installation. I created a database named prestashopdb and a user prestashopuser prior to the installation. You can do this from the MySQL CLI, or you can install phpMyAdmin and manage databases using that.

[root@server1 ~]# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.5.48 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database prestashopdb;
Query OK, 1 row affected (0.00 sec)

mysql> GRANT ALL ON prestashopdb.* TO prestashopuser@localhost IDENTIFIED BY 'prestashop123#';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> quit
Bye

4. Confirm the installation of the PHP modules GD, Mcrypt, Mbstring and PDO MySQL

PHP modules like GD and PDO MySQL are installed during the initial PHP setup. I need to enable the Mcrypt and Mbstring modules to complete the pre-requisites for the PrestaShop installation.

Mcrypt Installation:

Install the EPEL repo for yum:

yum -y install epel-release
yum install php-mcrypt -y

MBstring installation

yum install php-mbstring -y

Installing Prestashop

Download the latest PrestaShop version from the official site and extract it in the home folder. Then modify the permissions of the folders/files to 755.

[root@server1 home]# unzip prestashop_1.6.1.2.zip
[root@server1 prestashop]# chmod -R 755 *.*
[root@server1 prestashop]# ll
total 160
drwxr-xr-x 2 root root 4096 Feb 23 09:45 Adapter
drwxr-xr-x 9 root root 4096 Feb 23 09:45 admin
-rwxr-xr-x 1 root root 12320 Oct 29 16:16 architecture.md
drwxr-xr-x 8 root root 4096 Feb 23 09:45 cache
drwxr-xr-x 17 root root 4096 Feb 23 09:45 classes
drwxr-xr-x 3 root root 4096 Feb 23 09:45 config
-rwxr-xr-x 1 root root 3617 Oct 29 16:16 CONTRIBUTING.md
-rwxr-xr-x 1 root root 5847 Oct 29 16:17 CONTRIBUTORS.md
drwxr-xr-x 4 root root 4096 Feb 23 09:45 controllers
drwxr-xr-x 4 root root 4096 Feb 23 09:45 Core
drwxr-xr-x 2 root root 4096 Feb 23 09:45 css
drwxr-xr-x 4 root root 4096 Feb 23 09:45 docs
drwxr-xr-x 2 root root 4096 Feb 23 09:45 download
-rwxr-xr-x 1 root root 2454 Oct 29 16:16 error500.html
-rwxr-xr-x 1 root root 1218 Oct 29 16:16 footer.php
-rwxr-xr-x 1 root root 1247 Oct 29 16:16 header.php
-rwxr-xr-x 1 root root 4717 Oct 29 16:16 images.inc.php
drwxr-xr-x 18 root root 4096 Feb 23 09:45 img
-rwxr-xr-x 1 root root 1068 Oct 29 16:16 index.php
-rwxr-xr-x 1 root root 1154 Oct 29 16:16 init.php
drwxr-xr-x 12 root root 4096 Feb 23 09:45 install
drwxr-xr-x 7 root root 4096 Feb 23 09:45 js
drwxr-xr-x 2 root root 4096 Feb 23 09:45 localization
drwxr-xr-x 2 root root 4096 Feb 23 09:45 log
drwxr-xr-x 3 root root 4096 Feb 23 09:45 mails
drwxr-xr-x 79 root root 4096 Feb 23 09:45 modules
drwxr-xr-x 5 root root 4096 Feb 23 09:45 override
drwxr-xr-x 2 root root 4096 Feb 23 09:45 pdf
-rwxr-xr-x 1 root root 6576 Oct 29 16:16 README.md
drwxr-xr-x 3 root root 4096 Feb 23 09:45 themes
drwxr-xr-x 18 root root 4096 Feb 23 09:45 tools
drwxr-xr-x 3 root root 4096 Feb 23 09:45 translations
drwxr-xr-x 2 root root 4096 Feb 23 09:45 upload
drwxr-xr-x 2 root root 4096 Feb 23 09:45 webservice

[root@server1 home]# cp -rp prestashop/* /var/www/saheetha.com/public_html/

This copies the prestashop folder contents from /home to the document root of the domain that is meant to be our online store; in my case, that path is /var/www/saheetha.com/public_html/.

Now open the URL http://domain.com/install/ in your browser.

Please navigate through the screenshots, which describe each installation stage.

Stage 1 : Language Selection

[Screenshot: language selection]

Stage 2 : License Agreement

Agree to the terms and conditions in the license agreement and click "Next" to proceed further.

[Screenshot: license agreement]

Stage 3 : System Compatibility check 

It will check the installation of the required PHP modules and the folder/file permissions before continuing with the installation.

[Screenshot: system compatibility check]

Stage 4 : Creating your own store information / Stage 5 : Database connection

[Screenshot: store information and database connection settings]

Stage 6 : Installation Stage

[Screenshot: installation progress]

Stage 7 : Final Stage

It will provide you with the login credentials to manage your online store.

[Screenshot: final stage with login credentials]

Now you're all set with your installation. Please make sure to delete the "install" folder from your domain's document root for security reasons.
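For example, with the document root used in this article:

[root@server1 ~]# rm -rf /var/www/saheetha.com/public_html/install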

How can we access the Admin Panel?

Before accessing the admin panel of your installation, you need to rename the "admin" folder under the installation's document root to some other name for security reasons. Otherwise, you will get a message like this in the browser when accessing your admin panel.

For security reasons, you cannot connect to the back office until you have
renamed the /admin folder (e.g. admin847v0u8kk/)
Please then access this page by the new URL (e.g. http://saheetha.com/admin847v0u8kk/)

I renamed my admin folder and accessed my admin panel with the login credentials. You can manage your products, orders, customers, price details, etc. from here.
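As a hedged example with the paths used here (the new folder name is just the one from the sample message above; pick your own hard-to-guess name):

[root@server1 ~]# mv /var/www/saheetha.com/public_html/admin /var/www/saheetha.com/public_html/admin847v0u8kk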

[Screenshot: PrestaShop admin panel]

Now you can head over to the PrestaShop user manuals to learn more about managing your online store.

You can see how easily you can build an online webstore using this software. Congratulations on your new venture with e-shops :). I hope you enjoyed reading this article, and I welcome your valuable comments and suggestions on it.

Have a Good Day!


How to Setup Nylas N1 Open Source Mail Client on Modern Web


Hi all, today we are going to set up an awesome and beautiful mail client called Nylas N1 on Ubuntu 15 and CentOS 7 desktops. Nylas N1 is a new open source email client licensed under GPLv3 and created by Nylas. It is an extensible mail client, now available for Linux, Mac and Windows, with a user-friendly interface designed with clean typography and delightful buttons that will feel familiar, especially to Gmail users. It supports rich plugins for different features, built on fully-supported, end-to-end tested APIs, and is compatible with hundreds of email providers, including Gmail, Yahoo, iCloud, Microsoft Exchange and more. So it's easy to create new experiences and workflows around email using Nylas N1.

Installing Nylas N1 on CentOS 7:

The latest released package of Nylas N1 is available to download for multiple platforms; we are going to install it on a CentOS 7 desktop. To download the RPM package, go to the Nylas N1 download page and copy the source link so that you can use the 'wget' command to fetch the package, as shown below.

$ wget https://github.com/nylas/N1/releases/download/0.4.9/N1-0.4.9.rpm

[Screenshot: downloading the Nylas N1 RPM package]

Once the RPM package is downloaded, simply install it by running the 'rpm' command with the '-i' flag under a sudo user, as shown below.

$ sudo rpm -i N1-0.4.9.rpm

Installing Nylas N1 on Ubuntu 15:

The installation process of the N1 email client is simple; we just need to download the Debian package. Go to the Nylas N1 download page and click on the N1.deb package to download it for your Ubuntu system.

[Screenshot: Nylas N1 Debian package download]

After downloading the package, go to the directory where you downloaded the Debian package and run the following command to install it.

$ sudo dpkg -i N1.deb

[Screenshot: installing Nylas N1 on Ubuntu]

Nylas N1 Application Setup:

Now you have successfully installed the N1 email client on your CentOS 7 or Ubuntu 15 desktop. To launch the N1 client, point to 'Accessories' under the Applications menu and click on 'Nylas N1'.

[Screenshot: launching Nylas N1]

Welcome to the next-generation email platform. Say hello and click on the Continue button.

[Screenshot: Nylas N1 welcome screen]

N1 is developed with modern web technologies, so developers are welcome to add rich functionality to N1 with easy-to-use JavaScript.

[Screenshot: N1 development features]

N1 is made possible by the Nylas Sync Engine, which provides secure bank-grade encryption and a modern API layer for email, contacts and calendars. Let's click on the Get Started button to explore its features.

[Screenshot: Nylas Sync Engine]

Now choose the email provider you want to use with the Nylas email client; here I am going to use my Gmail account.

[Screenshot: choosing an email provider]

In the next step, you need to allow the Nylas email client access to the basic information of your email account.

[Screenshot: allowing account access]

That's it. You are all set and ready to use Nylas N1 Email Client.

[Screenshot: N1 ready to use]

The uniqueness of Nylas is not just that it is open source or well designed; it's that it's extensible like a web browser. New features, plugins and themes can be added to N1 easily. Developers can write extensions using JavaScript, React, NodeJS, Flux and Electron.

Click on the 'Install' button if you wish to install some of the example plugins, then click on 'Start using N1'.

[Screenshot: N1 example plugins]

Here you will have your entire Gmail inbox synced with Nylas N1 and access to your emails in the N1 email client.

[Screenshot: N1 email client with synced inbox]

Conclusion:

We hope you have enjoyed following this article and that you will find N1 a truly unique email client. N1 is designed to be hacked on and extended with plugins and extensions. The best thing about N1 is that it's an open-source project: if a feature is missing or something needs fixing, anyone can dive in and play with the code. Choose the settings that suit you from its many available features, or add some of the available plugins that best fit your needs, to make N1 more useful and better looking for you. Do not forget to share your thoughts and leave your valuable comments and suggestions.


A Guide to Install and Use ZFS on CentOS 7


ZFS, short for Zettabyte File System, is an advanced and highly scalable filesystem. It was originally developed by Sun Microsystems and is now part of the OpenZFS project. With so many filesystems available on Linux, it is quite natural to ask what is special about ZFS. Unlike other filesystems, it is not just a filesystem but a logical volume manager as well. Some of the features of ZFS that make it popular are:

  • Data integrity - data consistency and integrity are ensured through copy-on-write and checksum techniques
  • Pooling of storage space - available storage drives can be put together into a single pool called a zpool
  • Software RAID - setting up a raidz array is as simple as issuing a single command
  • Inbuilt volume manager - ZFS acts as a volume manager as well
  • Snapshots, clones, compression - these are some of the advanced features that ZFS provides

ZFS is a 128-bit filesystem and has the capacity to store 256 zettabytes! In this guide, we will learn how to install and set up ZFS, and how to use some important ZFS commands, on a CentOS 7 server.

NOTE: The installation part is specific to a CentOS server, while the commands are common to any Linux system.

Terminology

Before we move on, let us understand some of the terminologies that are commonly used in ZFS.

Pool

A logical grouping of storage drives. It is the basic building block of ZFS, and it is from here that storage space gets allocated for datasets.

Datasets

The components of the ZFS filesystem, namely filesystems, clones, snapshots and volumes, are referred to as datasets.

Mirror

A virtual device storing identical data copies on two or more disks. In situations where one disk fails, the same data is available on the other disks of that mirror.

Resilvering

The process of copying data from one disk to another when restoring a device.

Scrub

Scrub is used for consistency checks in ZFS, similar to how fsck is used in other filesystems.
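For example, a manual scrub of a pool can be started and its progress checked like this (a sketch using the pool name created later in this guide):

zpool scrub zfspool
zpool status zfspool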

Installing ZFS

In order to install ZFS on CentOS, we first need to set up the EPEL repository for supporting packages and then the ZFS repository to install the required ZFS packages.

Note: Please prefix sudo to all the commands if you are not the root user. 

yum localinstall --nogpgcheck http://epel.mirror.net.in/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm

Now install the kernel development and zfs packages. Kernel development packages are needed as ZFS is built as a module and inserted into the kernel.

yum install kernel-devel zfs

Verify that the zfs module is inserted into the kernel using the 'lsmod' command and, if not, insert it manually using the 'modprobe' command.

[root@li1467-130 ~]# lsmod |grep zfs

[root@li1467-130 ~]# modprobe zfs

[root@li1467-130 ~]# lsmod |grep zfs
zfs 2790271 0
zunicode 331170 1 zfs
zavl 15236 1 zfs
zcommon 55411 1 zfs
znvpair 89086 2 zfs,zcommon
spl 92029 3 zfs,zcommon,znvpair

Let us check if we are able to use the zfs commands:

[root@li1467-130 ~]# zfs list
no datasets available

Administration

ZFS has two main utilities, zpool and zfs. While zpool deals with the creation and maintenance of pools using disks, the zfs utility is responsible for the creation and maintenance of datasets.

zpool utility

Creating and destroying pools

First, verify the disks available for creating a storage pool.

[root@li1467-130 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,  0  Mar 16 08:12 /dev/sda
brw-rw---- 1 root disk 8, 16 Mar 16 08:12 /dev/sdb
brw-rw---- 1 root disk 8, 32 Mar 16 08:12 /dev/sdc
brw-rw---- 1 root disk 8, 48 Mar 16 08:12 /dev/sdd
brw-rw---- 1 root disk 8, 64 Mar 16 08:12 /dev/sde
brw-rw---- 1 root disk 8, 80 Mar 16 08:12 /dev/sdf

Create a pool from a set of drives.

zpool create <option> <pool name> <drive 1> <drive 2> .... <drive n>

[root@li1467-130 ~]# zpool create -f zfspool sdc sdd sde sdf

The 'zpool status' command displays the status of the available pools:

[root@li1467-130 ~]# zpool status
pool: zfspool
state: ONLINE
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
zfspool     ONLINE       0     0     0
  sdc       ONLINE       0     0     0
  sdd       ONLINE       0     0     0
  sde       ONLINE       0     0     0
  sdf       ONLINE       0     0     0

errors: No known data errors

Verify if the pool creation was successful.

[root@li1467-130 ~]# df -h
Filesystem           Size  Used  Avail  Use%  Mounted on
/dev/sda              19G  1.4G    17G    8%  /
devtmpfs             488M     0   488M    0%  /dev
tmpfs                497M     0   497M    0%  /dev/shm
tmpfs                497M   50M   447M   11%  /run
tmpfs                497M     0   497M    0%  /sys/fs/cgroup
tmpfs                100M     0   100M    0%  /run/user/0
zfspool              3.7G     0   3.7G    0%  /zfspool

As you can see, zpool has created a pool named 'zfspool' with a size of 3.7GB and has also mounted it at /zfspool.

To destroy a pool, use the 'zpool destroy' command

zpool destroy <pool name>

[root@li1467-130 ~]# zpool destroy zfspool
[root@li1467-130 ~]# zpool status
no pools available

Let us now try creating a simple mirror pool.

zpool create <option> <pool name> mirror <drive 1> <drive 2>... <drive n>

We can also create multiple mirrors at the same time by repeating the mirror keyword followed by the drives.

[root@li1467-130 ~]# zpool create -f mpool mirror sdc sdd mirror sde sdf
[root@li1467-130 ~]# zpool status
pool: mpool
state: ONLINE
scan: none requested
config:

NAME          STATE     READ WRITE CKSUM
mpool         ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    sdc       ONLINE       0     0     0
    sdd       ONLINE       0     0     0
  mirror-1    ONLINE       0     0     0
    sde       ONLINE       0     0     0
    sdf       ONLINE       0     0     0

errors: No known data errors

In the above example, we have created a pool with two mirrors, each consisting of two disks.

Similarly, we can create a raidz pool.

[root@li1467-130 ~]# zpool create -f rpool raidz sdc sdd sde sdf
[root@li1467-130 ~]# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:

NAME          STATE     READ WRITE CKSUM
rpool         ONLINE       0     0     0
  raidz1-0    ONLINE       0     0     0
    sdc       ONLINE       0     0     0
    sdd       ONLINE       0     0     0
    sde       ONLINE       0     0     0
    sdf       ONLINE       0     0     0

errors: No known data errors

Managing devices in ZFS pools

Once a pool is created, it is possible to add or remove hot spares and cache devices from the pool, attach or detach devices from mirrored pools, and replace devices. But non-redundant and raidz devices cannot be removed from a pool. We will see how to perform some of these operations in this section.

I'm first creating a pool called 'testpool' consisting of two devices, sdc and sdd. Another device, sde, will then be added to it.

[root@li1467-130 ~]# zpool create -f testpool sdc sdd

[root@li1467-130 ~]# zpool add testpool sde
[root@li1467-130 ~]# zpool status
pool: testpool
state: ONLINE
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
testpool    ONLINE       0     0     0
  sdc       ONLINE       0     0     0
  sdd       ONLINE       0     0     0
  sde       ONLINE       0     0     0

errors: No known data errors

As mentioned earlier, I cannot remove this newly added device as it is not a redundant or raidz pool.

[root@li1467-130 ~]# zpool remove testpool sde
cannot remove sde: only inactive hot spares, cache, top-level, or log devices can be removed

But I can add a spare disk to this pool and remove it.

[root@li1467-130 ~]# zpool add testpool spare sdf
[root@li1467-130 ~]# zpool status
pool: testpool
state: ONLINE
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
testpool    ONLINE       0     0     0
  sdc       ONLINE       0     0     0
  sdd       ONLINE       0     0     0
  sde       ONLINE       0     0     0
spares
  sdf       AVAIL

errors: No known data errors
[root@li1467-130 ~]# zpool remove testpool sdf
[root@li1467-130 ~]# zpool status
pool: testpool
state: ONLINE
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
testpool    ONLINE       0     0     0
  sdc       ONLINE       0     0     0
  sdd       ONLINE       0     0     0
  sde       ONLINE       0     0     0

errors: No known data errors

Similarly, we can use the 'attach' command to attach disks to a mirrored or non-mirrored pool, and the 'detach' command to detach disks from a mirrored pool.

zpool attach <options> <pool name> <device> <new device>

zpool detach <pool name> <device>
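For illustration, here is a minimal sketch, assuming a hypothetical single-disk pool 'mypool' built on sdc: attaching sdd turns it into a two-way mirror, and detach undoes that.

[root@li1467-130 ~]# zpool attach -f mypool sdc sdd
[root@li1467-130 ~]# zpool detach mypool sdd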

When a device fails or gets corrupted, we can replace it using the 'replace' command.

zpool replace <options> <pool name> <device> <new device>

We will test this by forcefully corrupting a device in a mirrored configuration.

[root@li1467-130 ~]# zpool create -f testpool mirror sdd sde

This creates a mirror pool consisting of disks sdd and sde. Now let us deliberately corrupt the sdd drive by writing zeroes to it.

[root@li1467-130 ~]# dd if=/dev/zero of=/dev/sdd
dd: writing to ‘/dev/sdd’: No space left on device
2048001+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 22.4804 s, 46.6 MB/s

We will use the 'scrub' command to detect this corruption.

[root@li1467-130 ~]# zpool scrub testpool
[root@li1467-130 ~]# zpool status
pool: testpool
state: ONLINE
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
see: http://zfsonlinux.org/msg/ZFS-8000-4J
scan: scrub repaired 0 in 0h0m with 0 errors on Fri Mar 18 09:59:40 2016
config:

NAME          STATE     READ WRITE CKSUM
testpool      ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    sdd       UNAVAIL      0     0     0  corrupted data
    sde       ONLINE       0     0     0

errors: No known data errors

We will now replace sdd with sdc.

[root@li1467-130 ~]# zpool replace testpool sdd sdc; zpool status
pool: testpool
state: ONLINE
scan: resilvered 83.5K in 0h0m with 0 errors on Fri Mar 18 10:05:17 2016
config:

NAME             STATE     READ WRITE CKSUM
testpool         ONLINE       0     0     0
  mirror-0       ONLINE       0     0     0
    replacing-0  UNAVAIL      0     0     0
      sdd        UNAVAIL      0     0     0  corrupted data
      sdc        ONLINE       0     0     0
    sde          ONLINE       0     0     0

errors: No known data errors

[root@li1467-130 ~]# zpool status
pool: testpool
state: ONLINE
scan: resilvered 74.5K in 0h0m with 0 errors on Fri Mar 18 10:00:36 2016
config:

NAME        STATE     READ WRITE CKSUM
testpool    ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    sdc     ONLINE       0     0     0
    sde     ONLINE       0     0     0

errors: No known data errors

Migration of pools

We can migrate storage pools between different hosts using export and import commands. For this, the disks used in the pool should be available from both the systems.

[root@li1467-130 ~]# zpool export testpool
[root@li1467-130 ~]# zpool status
no pools available

The command 'zpool import' lists all the pools that are available for importing. Execute this command from the system where you want to import the pool.

[root@li1467-131 ~]# zpool import
pool: testpool
id: 3823664125009563520
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

testpool    ONLINE
  sdc       ONLINE
  sdd       ONLINE
  sde       ONLINE

Now import the required pool

[root@li1467-131 ~]# zpool import testpool
[root@li1467-131 ~]# zpool status
pool: testpool
state: ONLINE
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
testpool    ONLINE       0     0     0
  sdc       ONLINE       0     0     0
  sdd       ONLINE       0     0     0
  sde       ONLINE       0     0     0

errors: No known data errors

iostat

One can verify the io statistics of the pool devices using the iostat command.

[root@li1467-130 ~]# zpool iostat -v testpool
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testpool    1.80M  2.86G     22     27   470K   417K
  sdc        598K   975M      8      9   200K   139K
  sdd        636K   975M      7      9   135K   139K
  sde        610K   975M      6      9   135K   139K
----------  -----  -----  -----  -----  -----  -----

zfs utility

We will now move on to the zfs utility. Here we will take a look at how to create and destroy datasets, and how to work with filesystem compression, quotas and snapshots.

Creating and destroying filesystems

A ZFS filesystem can be created using the 'zfs create' command:

zfs create <filesystem>


[root@li1467-130 ~]# zfs create testpool/students
[root@li1467-130 ~]# zfs create testpool/professors
[root@li1467-130 ~]# df -h
Filesystem           Size  Used  Avail  Use%  Mounted on
/dev/sda              19G  1.4G    17G    8%  /
devtmpfs             488M     0   488M    0%  /dev
tmpfs                497M     0   497M    0%  /dev/shm
tmpfs                497M   50M   447M   11%  /run
tmpfs                497M     0   497M    0%  /sys/fs/cgroup
testpool             2.8G     0   2.8G    0%  /testpool
tmpfs                100M     0   100M    0%  /run/user/0
testpool/students    2.8G     0   2.8G    0%  /testpool/students
testpool/professors  2.8G     0   2.8G    0%  /testpool/professors

From the above output, observe that even though no mount point was given at the time of filesystem creation, the mountpoint is created using the same path relationship as that of the pool.

zfs create accepts the -o option, with which we can specify properties like mountpoint, compression, quota, exec and so on.
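For example, a quick sketch, assuming we wanted a compressed filesystem with a custom mountpoint (the dataset name and path here are illustrative, not part of our setup):

[root@li1467-130 ~]# zfs create -o mountpoint=/reports -o compression=lz4 testpool/reports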

One can list the available filesystems using zfs list:

[root@li1467-130 ~]# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
testpool              100M  2.67G    19K  /testpool
testpool/professors    31K  1024M  20.5K  /testpool/professors
testpool/students    1.57M  98.4M  1.57M  /testpool/students

We can destroy a filesystem using the destroy option

zfs destroy <filesystem>
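For instance, assuming we had a scratch filesystem 'testpool/tmp' (an illustrative name) that we no longer needed, we could run the following. Note that destroy permanently deletes the data.

[root@li1467-130 ~]# zfs destroy testpool/tmp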

Compression

We will now look at how compression works in ZFS. Before we start using compression, we need to enable it by setting the 'compression' property:

zfs set <compression=value> <filesystem|volume|snapshot>

Once this is done, compression and decompression happen on the filesystem transparently, on the fly.

In our example, I will enable compression on the students filesystem using the lz4 compression algorithm.

[root@li1467-130 ~]# zfs set compression=lz4 testpool/students

I will now copy a file of size 15M into this filesystem and check the size once it is copied.

[root@li1467-130 /]# cd /var/log
[root@li1467-130 log]# du -h secure
15M secure

[root@li1467-130 ~]# cp /var/log/secure /testpool/students/

[root@li1467-130 students]# df -h .
Filesystem         Size  Used  Avail  Use%  Mounted on
testpool/students  100M  1.7M    99M    2%  /testpool/students

Notice that the space used on the filesystem is only 1.7M, while the file size was 15M. We can check the compression ratio as well.

[root@li1467-130 ~]# zfs get compressratio testpool
NAME      PROPERTY       VALUE  SOURCE
testpool  compressratio  9.03x  -

Quotas and reservation

Let me explain quotas with a real-life example. Suppose we have a requirement in a university to limit the disk space used by the filesystems for professors and students. Let us assume that we need to allocate 100MB for students and 1GB for professors. We can make use of 'quotas' in ZFS to fulfill this requirement. Quotas ensure that the amount of disk space used by a filesystem doesn't exceed the limits set. A reservation actually allocates and guarantees that the required amount of disk space is available for the filesystem.

zfs set quota=<value> <filesystem|volume|snapshot>

zfs set reservation=<value> <filesystem|volume|snapshot>


[root@li1467-130 ~]# zfs set quota=100M testpool/students
[root@li1467-130 ~]# zfs set reservation=100M testpool/students
[root@li1467-130 ~]# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
testpool              100M  2.67G    19K  /testpool
testpool/professors    19K  2.67G    19K  /testpool/professors
testpool/students    1.57M  98.4M  1.57M  /testpool/students

[root@li1467-130 ~]# zfs set quota=1G testpool/professors
[root@li1467-130 ~]# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
testpool              100M  2.67G    19K  /testpool
testpool/professors    19K  1024M    19K  /testpool/professors
testpool/students    1.57M  98.4M  1.57M  /testpool/students

In the above example, we have allocated 100MB for students and 1GB for professors. Observe the 'AVAIL' column in 'zfs list'. Initially each showed 2.67GB available, and after setting the quotas, the values changed accordingly.

Snapshots

Snapshots are read-only copies of a ZFS filesystem at a given point in time. They consume no extra space in the zpool when created. We can either roll back to that state at a later stage or extract only a single file or a set of files as per requirement.

I will now create some directories and a file under '/testpool/professors' from our previous example and then take a snapshot of this filesystem.

[root@li1467-130 ~]# cd /testpool/professors/

[root@li1467-130 professors]# mkdir maths physics chemistry

[root@li1467-130 professors]# cat > qpaper.txt
Question paper for the year 2016-17
[root@li1467-130 professors]# ls -la
total 4
drwxr-xr-x 5 root root  6 Mar 19 10:34 .
drwxr-xr-x 4 root root  4 Mar 19 09:59 ..
drwxr-xr-x 2 root root  2 Mar 19 10:33 chemistry
drwxr-xr-x 2 root root  2 Mar 19 10:32 maths
drwxr-xr-x 2 root root  2 Mar 19 10:32 physics
-rw-r--r-- 1 root root 36 Mar 19 10:35 qpaper.txt

To take a snapshot, use the following syntax:

zfs snapshot <filesystem|volume@<snap>>


[root@li1467-130 professors]# zfs snapshot testpool/professors@03-2016
[root@li1467-130 professors]# zfs list -t snapshot
NAME                          USED  AVAIL  REFER  MOUNTPOINT
testpool/professors@03-2016      0      -  20.5K  -

I'll now delete the file that was created and then extract it from the snapshot.

[root@li1467-130 professors]# rm -rf qpaper.txt
[root@li1467-130 professors]# ls
chemistry maths physics
[root@li1467-130 professors]# cd .zfs
[root@li1467-130 .zfs]# cd snapshot/03-2016/
[root@li1467-130 03-2016]# ls
chemistry maths physics qpaper.txt

[root@li1467-130 03-2016]# cp -a qpaper.txt /testpool/professors/
[root@li1467-130 03-2016]# cd /testpool/professors/
[root@li1467-130 professors]# ls
chemistry maths physics qpaper.txt

The deleted file is back in its place.
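Alternatively, instead of copying individual files back, we could revert the entire filesystem to the snapshot state with the 'zfs rollback' command; note that this discards all changes made after the snapshot was taken.

[root@li1467-130 professors]# zfs rollback testpool/professors@03-2016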

We can list all the available snapshots using zfs list:

[root@li1467-130 ~]# zfs list -t snapshot
NAME                          USED  AVAIL  REFER  MOUNTPOINT
testpool/professors@03-2016  10.5K      -  20.5K  -

Finally, let's destroy the snapshot using the zfs destroy command:

zfs destroy <filesystem|volume@<snap>>


[root@li1467-130 ~]# zfs destroy testpool/professors@03-2016

[root@li1467-130 ~]# zfs list -t snapshot
no datasets available

Conclusion

In this article, you have learnt how to install ZFS on CentOS 7 and how to use some basic and important commands of the zpool and zfs utilities. This is not a comprehensive list; ZFS has many more capabilities, which you can explore further on its official page.

The post A Guide to Install and Use ZFS on CentOS 7 appeared first on LinOxide.

How to Build a Minecraft Server on your CentOS 7


MINECRAFT is an open-world video game developed in Java, originally created by Markus "Notch" Persson, maintained by Mojang AB and presently owned by Microsoft Studios. Players interact with the world by placing and breaking various types of blocks in a three-dimensional environment. They can collect resources, build structures, battle mobs, manage hunger and explore the land, creating and destroying structures on both multiplayer servers and single-player worlds across multiple game modes. The six available game modes are:

  • Survival
  • Creative
  • Hardcore
  • Adventure
  • Spectator
  • Demo

In this article, I'm discussing how to set up a Minecraft server on CentOS 7.

minecraft-server-logo

First of all, let me go through the installation requirements.

Prerequisites

  •  VPS or Dedicated servers with SSH access
  •  RAM : 1GB or more
  •  Disk Space : 5GB or more
  •  Install the latest Java compatible with the OS architecture.
  •  Disable SELinux

Let us start with the installation procedures. We need to install the latest Java version for the server depending on its architecture.

Install JAVA

The Minecraft server requires Java to be installed and running.

[root@server1 ~]# yum install java-1.6.0-openjdk

===============================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================
Installing:
java-1.6.0-openjdk x86_64 1:1.6.0.38-1.13.10.0.el7_2 updates 42 M
Installing for dependencies:
alsa-lib x86_64 1.0.28-2.el7 base 391 k
flac-libs x86_64 1.3.0-5.el7_1 base 169 k
fontconfig x86_64 2.10.95-7.el7 base 228 k
fontpackages-filesystem noarch 1.44-8.el7 base 9.9 k
giflib x86_64 4.1.6-9.el7 base 40 k
gsm x86_64 1.0.13-11.el7 base 30 k
javapackages-tools noarch 3.4.1-11.el7 base 73 k
libICE x86_64 1.0.9-2.el7 base 65 k
libSM x86_64 1.2.2-2.el7 base 39 k
libX11 x86_64 1.6.3-2.el7 base 605 k
libX11-common noarch 1.6.3-2.el7 base 162 k
libXau x86_64 1.0.8-2.1.el7 base 29 k
libXext x86_64 1.3.3-3.el7 base 39 k
libXi x86_64 1.7.4-2.el7 base 40 k
libXrender x86_64 0.9.8-2.1.el7 base 25 k
libXtst x86_64 1.2.2-2.1.el7 base 20 k
libasyncns x86_64 0.8-7.el7 base 26 k
libjpeg-turbo x86_64 1.2.90-5.el7 base 134 k
libogg x86_64 2:1.3.0-7.el7 base 24 k
libpng x86_64 2:1.5.13-7.el7_2 updates 213 k
libsndfile x86_64 1.0.25-10.el7 base 149 k
libvorbis x86_64 1:1.3.3-8.el7 base 204 k
libxcb x86_64 1.11-4.el7 base 189 k
libxslt x86_64 1.1.28-5.el7 base 242 k
pulseaudio-libs x86_64 6.0-7.el7 base 576 k
python-javapackages noarch 3.4.1-11.el7 base 31 k
python-lxml x86_64 3.2.1-4.el7 base 758 k
tzdata-java noarch 2016a-1.el7 updates 176 k

All these packages will be installed. Now we need to download the Minecraft server package from their website into a "minecraft" folder.

Create a MINECRAFT folder

Create a minecraft folder for the installation and other game files. It is always advised to run the executable inside a dedicated folder, as it creates several configuration files. This makes it easier to organize and locate all the files.

[root@server1 ~]# mkdir minecraft
[root@server1 ~]# cd minecraft

Download the Minecraft server jar file

Download the minecraft .jar file to the minecraft folder and modify the .jar file permissions to make it executable.

[root@server1 minecraft]# wget https://minecraft.net/download/minecraft_server.jar
--2016-03-09 07:28:39-- https://minecraft.net/download/minecraft_server.jar
Connecting to minecraft.net (minecraft.net)|54.192.151.239|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://s3.amazonaws.com/MinecraftDownload/launcher/minecraft_server.jar [following]
--2016-03-09 07:28:39-- https://s3.amazonaws.com/MinecraftDownload/launcher/minecraft_server.jar
Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.81.212
Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.81.212|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2360903 (2.3M) [application/octet-stream]
Saving to: ‘minecraft_server.jar’

100%[=====================================================================================================>] 23,60,903 84.3KB/s in 28s

2016-03-09 07:29:09 (81.7 KB/s) - ‘minecraft_server.jar’ saved [2360903/2360903]

[root@server1 minecraft]# chmod +x minecraft_server.jar

Make sure Screen is installed on your server, or else run this command to install it.

[root@server1 minecraft]# yum install screen

Run the Application

Now get into a screen session and run the Minecraft .jar file with settings suited to your hardware. Both the Java and the native executable versions can be run from the command line with extra parameters to configure memory, the graphical interface, mode, architecture and so on.

Given my server's capacity, I prefer to run my Minecraft server with 512MB and without the graphical interface to lower CPU and memory usage.
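First, start a named screen session so the server keeps running after you disconnect; the session name 'minecraft' below is just an example.

[root@server1 minecraft]# screen -S minecraft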

[root@server1 minecraft]# java -Xmx512M -Xms512M -jar minecraft_server.jar nogui

The launching text will look like this:
229 recipes
27 achievements
2016-03-09 07:30:09 [INFO] Starting minecraft server version 1.5.2
2016-03-09 07:30:09 [WARNING] To start the server with more ram, launch it as "java -Xmx1024M -Xms1024M -jar minecraft_server.jar"
2016-03-09 07:30:09 [INFO] Loading properties
2016-03-09 07:30:09 [WARNING] server.properties does not exist
2016-03-09 07:30:09 [INFO] Generating new properties file
2016-03-09 07:30:09 [INFO] Default game type: SURVIVAL
2016-03-09 07:30:09 [INFO] Generating keypair
2016-03-09 07:30:09 [INFO] Starting Minecraft server on *:25565
2016-03-09 07:30:09 [WARNING] Failed to load operators list: java.io.FileNotFoundException: ./ops.txt (No such file or directory)
2016-03-09 07:30:09 [WARNING] Failed to load white-list: java.io.FileNotFoundException: ./white-list.txt (No such file or directory)
2016-03-09 07:30:09 [INFO] Preparing level "world"
2016-03-09 07:30:10 [INFO] Preparing start region for level 0
2016-03-09 07:30:11 [INFO] Preparing spawn area: 4%
2016-03-09 07:30:12 [INFO] Preparing spawn area: 9%
2016-03-09 07:30:13 [INFO] Preparing spawn area: 16%
2016-03-09 07:30:14 [INFO] Preparing spawn area: 24%
2016-03-09 07:30:15 [INFO] Preparing spawn area: 35%
2016-03-09 07:30:16 [INFO] Preparing spawn area: 45%
2016-03-09 07:30:17 [INFO] Preparing spawn area: 55%
2016-03-09 07:30:18 [INFO] Preparing spawn area: 61%
2016-03-09 07:30:19 [INFO] Preparing spawn area: 70%
2016-03-09 07:30:20 [INFO] Preparing spawn area: 78%
2016-03-09 07:30:21 [INFO] Preparing spawn area: 84%
2016-03-09 07:30:22 [INFO] Preparing spawn area: 95%
2016-03-09 07:30:23 [INFO] Done (13.396s)! For help, type "help" or "?"

You can detach and get back to your normal screen by pressing Ctrl+A then D.

You can get back to the screen where Minecraft is running by using the screen resume command.
# screen -r (screenid)
You can also run the executable with 1GB of memory or more, depending on your server specifications. This is how we run it with 1GB of memory:

java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui
Tip: If you want to reserve less memory at startup, you may set the -Xms parameter to a lower value, say:

java -Xms512M -Xmx1024M -jar minecraft_server.jar nogui

The -Xms parameter controls how much memory is reserved at startup. Your server will start with 512MB of RAM and, whenever it needs more, allocate additional memory until it reaches the 1GB maximum set by -Xmx.

Hurray!! Your Minecraft server is all set and should be running well. Your friends can now log in to your server and start building.

You can host any number of players, depending on your server resources. There is also software called Multicraft for managing Minecraft servers; you can download it from the official website and use it as a user-friendly control panel for managing your MC servers.
I hope you enjoyed reading this article. I welcome your valuable suggestions and comments on it.

Thank you and have a great day :)

The post How to Build a Minecraft Server on your CentOS 7 appeared first on LinOxide.


How to Setup Centralized Backup Server with Amanda On CentOS 7


Amanda (Advanced Maryland Automatic Network Disk Archiver) is the most popular open source backup and recovery software in the world, protecting more than a million servers and desktops running various versions of Linux, UNIX, BSD, Mac OS X and Microsoft Windows. Amanda supports tapes, disks, optical media and changers, and gives us the capability to use disk storage as backup media. Configuring, initiating and verifying a backup can complete the backup cycle within 30 minutes. Amanda has been used successfully in environments ranging from a single standalone machine to hundreds of clients. It can save you from expensive proprietary backup software and those custom backup scripts that have a propensity to break at the worst times.

In this article we will show how you can :

  • Install and configure the Amanda backup server.
  • Set backup parameters.
  • Verify the configuration and Verify the backup.
  • Install and configure the Amanda Linux clients for backup.

Step 1: Installing Amanda on CentOS 7

We start by installing the Amanda backup server on our CentOS 7 server. Open the command line terminal of your CentOS 7 host using your root user credentials and set up its IP and FQDN. Run the following commands to set the hostname of your Amanda backup server.

# hostnamectl set-hostname amanda-server

# vi /etc/hosts
192.168.10.177 amanda-server amanda-server.linoxide.com

Make sure that you are connected to the Internet for installing updates and Amanda server packages. Run the following command to update your system with latest updates and patches.

# yum update

Once your system is updated, you can start the installation of the Amanda backup server using the 'yum' command, as its packages are available in the default EPEL repository. Go ahead by running the following command and press the 'y' key to proceed with the installation, including its dependencies.

[root@amanda-server ~]# yum install amanda*

Amanda Installation

Amanda is executed by xinetd, so we need to install it along with some of the other packages required by Amanda on the system.

# yum install xinetd gnuplot perl-ExtUtils-Embed

Amanda Dependencies

Step 2: Starting Xinetd Service

Now we have xinetd and the Amanda backup server installed on our CentOS 7 operating system. Let's start the 'xinetd' service using the command shown below.

[root@amanda ~]# service xinetd restart

Verify the Amanda installation using the following command.

[root@amanda ~]# amadmin --version
amadmin-3.3.3

xinetd start

Step 3: Amanda Configurations Setup

First we will make some directories using the root user. Make sure to confirm your Amanda user name, which is probably "amandabackup", "amanda" or "backup", depending on how you installed Amanda. We are using the default 'amandabackup' here to assign ownership of the following directory structure.

[root@amanda ~]# mkdir -p /amanda /etc/amanda

[root@amanda ~]# chown amandabackup /amanda /etc/amanda

Now switch to your 'amandabackup' user and run the following commands.

[root@amanda ~]# su - amandabackup

-bash-4.2$ mkdir -p /amanda/vtapes/slot{1,2,3,4}

-bash-4.2$ mkdir -p /amanda/holding

-bash-4.2$ mkdir -p /amanda/state/{curinfo,log,index}

-bash-4.2$ mkdir -p /etc/amanda/MyConfig

So, all of the data will live under the '/amanda' folder, but you can put it wherever you like. Now we are going to add an 'amanda.conf' file in the '/etc/amanda/MyConfig/' directory with the following contents.
This is the main configuration file for Amanda, the Advanced Maryland Automatic Network Disk Archiver. Open it using your favorite editor and put the following contents in it. Keep in mind that you should edit 'dumpuser' appropriately if your Amanda user has another name.

-bash-4.2$ vi /etc/amanda/MyConfig/amanda.conf

org "MyConfig"
infofile "/amanda/state/curinfo"
logdir "/amanda/state/log"
indexdir "/amanda/state/index"
dumpuser "amandabackup"

tpchanger "chg-disk:/amanda/vtapes"
labelstr "MyData[0-9][0-9]"
autolabel "MyData%%" EMPTY VOLUME_ERROR
tapecycle 4
dumpcycle 3 days
amrecover_changer "changer"

tapetype "TEST-TAPE"
define tapetype TEST-TAPE {
  length 100 mbytes
  filemark 4 kbytes
}

define dumptype simple-gnutar-local {
  auth "local"
  compress none
  program "GNUTAR"
}

holdingdisk hd1 {
  directory "/amanda/holding"
  use 50 mbytes
  chunksize 1 mbyte
}

There are a number of configuration parameters that control the behavior of the Amanda programs. All have default values, so you need not specify a parameter in amanda.conf if the default is suitable. You can find the original Amanda configuration file under the '/etc/amanda/DailySet1/' directory.

Next, we will add a 'disklist' file with a single disk list entry (DLE). The 'disklist' file determines which disks will be backed up by Amanda. The file contains includefile directives or disklist entries (DLEs). Generally, a DLE describes a partition or filesystem.

-bash-4.2$ vi /etc/amanda/MyConfig/disklist

localhost /etc simple-gnutar-local
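Each line of this file is one DLE of the form 'host path dumptype'. For illustration, a hypothetical second entry that would also back up /home using the same dumptype would look like this:

localhost /home simple-gnutar-local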

Save and close the file using ':wq!' if you are using the 'vi' or 'vim' editor. Now that the configuration is done, let's move to the next step.

Step 4: Check Amanda Configuration

Amanda has a nice utility called 'amcheck' which can check a configuration for you. Running it tests the configuration and reports the results. Note that almost all Amanda commands take the configuration name as the first argument; in our case it is "MyConfig".

Let's run the following command to check the Tape Host Server configurations.

-bash-4.2$ amcheck MyConfig

Check configuration

Amcheck runs a number of self-checks on both the Amanda tape server host and the Amanda client hosts.
On the tape server host, amcheck can go through the same tape checking used at the start of the nightly amdump run to verify that the correct tape for the next run is mounted. It can also do a self-check on all client hosts to make sure each host is running and that permissions on the filesystems to be backed up are correct.

You can specify many host/disk expressions; only disks that match an expression will be checked. All disks are checked if no expressions are given.

Step 5: Run Test Backup

The test results are positive: no errors were found, so we can move forward. The tool that runs backups is 'amdump'. It takes only the configuration name and prints nothing to the terminal. Let's run it as the Amanda user as shown below.

-bash-4.2$ amdump MyConfig

It will take a few seconds and produce no output. Immediately afterwards, run the following command; it should print '0'. If you see anything other than zero, the backup failed.

-bash-4.2$ echo $?
0

Amdump is the main interface to the Amanda backup process. It loads the specified configuration and attempts to back up every disk specified by the 'disklist'. Amdump is normally run by 'cron' that we will show you in next steps.

If the backup failed, you can see a handy report of what happened by using the 'amreport' command along with your configuration name.

-bash-4.2$ amreport MyConfig

Amreport generates a summary report of an Amanda backup run, as shown in the image below.

Amanda Backup Report

Step 6: Amanda Backup Scheduling

For daily execution, 'amdump' can be scheduled via the cron daemon. Nobody wants to remember to run the backups every night; that's why we have cron! Let's add the following lines.

-bash-4.2$ crontab -e

0 17 * * * /usr/sbin/amcheck -m MyConfig
15 2 * * * /usr/sbin/amdump MyConfig

Save and close the crontab editor. These lines schedule a configuration check every day at 17:00 and the backup run at 02:15.

If you are logged in as the root user, you can use the following command to add the cron jobs for your Amanda user.

# su amandabackup -c "crontab -e"

Depending on how you've installed Amanda, you may need to change '/usr/sbin' to something else after finding out where your distro has put the Amanda tools. You can use 'which amcheck' on the command line to find the location.

# which amcheck
/usr/sbin/amcheck

Amcheck can email you about problems, which is why we used the '-m' flag in the crontab, and amdump will happily email you a report every night. Automation is no good if you never find out something is broken. So, just add a 'mailto' setting to your 'amanda.conf' file.

-bash-4.2$ vi /etc/amanda/MyConfig/amanda.conf

mailto "user@domain.com"
:wq!

Step 7: Amanda Backup Client Installation

In this section we describe how to install and configure client machines so that they can be backed up by the Amanda backup server we set up in the previous steps.

We are going to use another CentOS 7 server for the Amanda client installation. To install the Amanda client packages, run the following command.

[root@centos-bk1 ~]# yum install amanda-client xinetd

Amanda Client

Step 8: Amanda Backup Client Configuration

The '/var/lib/amanda/.amandahosts' file is used to specify which Amanda server may connect. Open the file in your editor, add the following entry and then save the changes.

[root@centos-bk1 ~]# vi /var/lib/amanda/.amandahosts

amanda-server amandabackup
:wq!

Also make sure that the same '/var/lib/amanda/.amandahosts' file on the server contains entries with the hostname of each Amanda client that is allowed to use the amrecover command; note that 'amrecover' must be run as root.
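As a quick illustration, a recovery session on a client might look like the following minimal sketch, assuming the /etc DLE from our earlier disklist; run amrecover as root.

# amrecover MyConfig
amrecover> setdisk /etc
amrecover> add hosts
amrecover> extract
amrecover> quit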

Conclusion

Amanda simplifies the life of a system administrator, who can easily set up a single server to back up multiple networked clients to a tape- or disk-based storage system. A unique scheduler optimizes the backup level for different clients in such a way that total backup time is about the same for every backup run. It frees system administrators from having to guess the rate of data change in their environments. I hope you have found this article helpful; there are still many things left to cover, and we will discuss those in coming articles. Thank you for reading, and don't forget to leave your valuable comments.

The post How to Setup Centralized Backup Server with Amanda On CentOS 7 appeared first on LinOxide.

How to Setup Ansible Automation Tool in CentOS 7


Hello and welcome to today's article on Ansible, an automation tool similar to Chef or Puppet. First of all, Ansible is easy to install, simple to configure and easy to understand, and in IT it is very important to keep your systems and processes simple. Ansible is used for configuration management: it helps configure your web and application servers, makes it easy to version your files, and lets you manage different configurations for your development, staging and production environments. It is also used for application deployment; it can fully automate multi-tier application deployments involving multiple groups of servers and databases.

Ansible connects to servers over SSH and runs the configured tasks, with no need to set up any special agent. All you need is Python and a user that can log in and execute the scripts. Ansible then starts gathering facts about the machine, such as which operating system and packages are installed and what other services are running. After that, Ansible runs playbooks, written in YAML format; playbooks are collections of commands which can perform multiple tasks.

Prerequisites:

In this article we will install and configure Ansible on CentOS 7 and manage two nodes in order to understand its functionality.

In our test environment we will be using three CentOS 7 Linux VMs: one controller, where the Ansible server is installed, and two nodes that will be managed by this controlling machine over SSH. Make sure that you have Python 2.6 or 2.7 installed on both the controller and the client nodes for a successful installation of Ansible.

Let's connect to the controller server using the root user, or a non-root user with sudo privileges, to get started with Ansible.

Setup EPEL Repository

First we need to enable the 'epel' repository for CentOS 7 on the controller server, because the Ansible package is not available in the default yum repositories. We will use the command below to enable the EPEL repository on CentOS 7 / RHEL 7.

# rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

Now run the command to update your operating system.

# yum -y update

Ansible EPEL

Installing Ansible :

Now we can install Ansible on CentOS 7 using the 'yum' command, which will install it along with its required dependencies; press the 'y' key to proceed as shown.

# yum install ansible

installing ansible

Once you have successfully installed Ansible, you can verify its installed version using the command below.

# ansible --version

Ansible version

Keys based SSH authentication with Nodes

In order to perform any deployment or management from the controller to a remote host, we first need to generate keys on the Ansible server and copy the public key to the client nodes. Run the command below on your Ansible server to generate its public and private keys.

# ssh-keygen -t rsa -b 4096

ssh keygen

After generating the SSH key, copy it to the remote server using the following command, which places the SSH key on the remote host.

# ssh-copy-id root@node1_ip

If you are using a custom SSH port, specify it with the '-p' parameter. You will be asked for the password of your client node; once you have provided the correct password, the key will be authorized.

# ssh-copy-id -p2178 root@node1_ip

The authenticity of host '[72.25.70.83]:2178 ([72.25.70.83]:2178)' can't be established.
ECDSA key fingerprint is 49:8a:9c:D9:35:le:09:3d:5f:31:43:a1:41:94:70:53.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Authorized uses only. All activity may be \ monitored and reported.
root@72.25.70.83's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh -p '2178' 'root@72.25.70.83'"
and check to make sure that only the key(s) you wanted were added.

You can also manually add the public RSA key of your controlling server to the client nodes. To do so, log in to your client node and follow the steps below.

First copy the key from the '/root/.ssh/id_rsa.pub' file and save it on the client node in the home directory of the root user, or of any other user you wish to authenticate.

[root@centos-7 .ssh]# cat id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAPNRNS/BVcT7XqHDuSvd8oncAjbNl2ZiYiU22MCNFKN8p/cgcblOZoZS0gjlQHpQLR1wm5hVu7PcxY/JAFX2phKyeZ+dbfQlAQ7HLRaaXWfuonelWgTCcs530bSg6XP3MTDRWjW0ZEFTLaOqVz+Yq2nUP3xRYmRKYNq2PhPRrkoBxnDGlmAsgGDm4gWz2TGE59uYHuXvY2Ys4OPeMFHAp0blR5nJIfVF40RB4uH0U79pp19qZ0vbghEvYUiyD4NMjqG13Ba4YYBQQIphe4GA3OTjBvjVmnmBCWZyDOcO+bWWyyKpabEEZOga3KnsoTw4iQ+d+iUyhPTZOvXaoOFUmrFQo5wWG229/GMJnYe1Qv0D3K3CcAQ== root@centos-7

[root@node2 ~]# vi .ssh/authorized_keys

ssh-rsa AAAAB3NzaC1yc2EAPNRNS/BVcT7XqHDuSvd8oncAjbNl2ZiYiU22MCNFKN8p/cgcblOZoZS0gjlQHpQLR1wm5hVu7PcxY/JAFX2phKyeZ+dbfQlAQ7HLRaaXWfuonelWgTCcs530bSg6XP3MTDRWjW0ZEFTLaOqVz+Yq2nUP3xRYmRKYNq2PhPRrkoBxnDGlmAsgGDm4gWz2TGE59uYHuXvY2Ys4OPeMFHAp0blR5nJIfVF40RB4uH0U79pp19qZ0vbghEvYUiyD4NMjqG13Ba4YYBQQIphe4GA3OTjBvjVmnmBCWZyDOcO+bWWyyKpabEEZOga3KnsoTw4iQ+d+iUyhPTZOvXaoOFUmrFQo5wWG229/GMJnYe1Qv0D3K3CcAQ== root@centos-7

Save and quit the file. You can now access both client nodes from the controlling server without being asked for the root password.

[root@centos-7 ~]# ssh -p 2178 root@node1_ip

[root@centos-7 ~]# ssh -p 2178 root@node2_ip

[root@centos-7 .ssh]# ssh -p 2178 root@72.25.10.83
Authorized uses only. All activity may be \ monitored and reported.
Last login: Sun Mar 27 21:42:09 2016 from 12.1.0.90

[root@node1 ~]# exit
logout
Connection to 72.25.10.83 closed.

[root@centos-7 .ssh]# ssh -p 2178 root@72.25.10.84
The authenticity of host '[72.25.10.84]:2178 ([72.25.10.84]:2178)' can't be established.
ECDSA key fingerprint is 49:8a:3c:85:55:61:79:1d:1f:21:33:s1:s1:fd:g0:53.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[72.25.10.84]:2178' (ECDSA) to the list of known hosts.
Authorized uses only. All activity may be \ monitored and reported.
Last login: Sun Mar 27 22:03:56 2016 from 12.1.0.90
[root@node2 ~]#
[root@node2 ~]# exit
logout
Connection to 72.25.10.84 closed.

Creating Inventory of Remote Hosts

After setting up SSH key authentication between the Ansible server and its client nodes, we are now going to configure those remote hosts on the Ansible controller server by editing the '/etc/ansible/hosts' file. This file holds the inventory of remote hosts to which Ansible connects through SSH to manage the systems.

Open the file using your editor to configure it.

[root@centos-7 ~]# vim /etc/ansible/hosts

Hosts Inventory

Here in the configuration file we have configured both client nodes to use port '2178'; if you are using the default SSH port, simply put the host IP address, as shown in the sketch below.
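A minimal sketch of what such inventory entries might look like, assuming our custom port 2178 and a hypothetical 'web-servers' group name:

[web-servers]
72.25.10.83:2178
72.25.10.73:2178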

After saving the file, let's run the following ansible command with the '-m' (module) option to verify connectivity from the Ansible server to the remote servers.

# ansible -m ping 72.25.10.83
# ansible -m ping 72.25.10.73

Connectivity test

You can also use the command below to ping all of your configured hosts.

[root@centos-7 ~]# ansible all -m ping
72.25.10.83 | success >> {
"changed": false,
"ping": "pong"
}

72.25.10.73 | success >> {
"changed": false,
"ping": "pong"
}

Executing Remote Commands

In the above examples we just used the ping module to ping the remote hosts. There are various modules available to execute commands on remote hosts. Now we will use the 'command' module with the 'ansible' command to get information about remote machines, such as the hostname, free disk space and uptime, as shown.

# ansible -m command -a 'hostnamectl' 72.25.10.83
# ansible -m command -a 'df -h' 72.25.10.83
# ansible -m command -a 'uptime' 72.25.10.83

Ansible remote commands

Similarly, you can run many shell commands using ansible, either on a single client host or on a group of similar hosts. For example, if you have configured a 'web-servers' group in your ansible host inventory file, you can run the command like this.

# ansible -m command -a "uptime" web-servers

Creating Playbooks in Ansible

Playbooks are Ansible's configuration management scripts, used to manage configurations and deployments on remote machines. Playbooks contain the set of policies that you want implemented on your remote systems.

Let's create your first playbook in a file named 'httpd.yml', with which we will configure a host to run the Apache web server. In it you choose which machines in your infrastructure to target and which remote user should complete the tasks, as shown in the configuration file.

[root@centos-7 ~]# vi httpd.yml

---
- hosts: 72.25.10.83
  remote_user: root
  tasks:
    - name: Installing Latest version of Apache
      yum: pkg=httpd state=latest
    - name: Copying the demo file
      template: src=/etc/ansible/index.html dest=/var/www/html owner=apache group=apache mode=0644
    - name: (Enable it on System Boot)
      service: name=httpd enabled=yes
      notify:
        - start apache
  handlers:
    - name: start apache
      service: name=httpd state=started

Save and close the file and then create a demo html file that will be placed in the default Document Root of remote hosts.

[root@centos-7 ~]# vi /etc/ansible/index.html

Installing Apache by Ansible

Apache Web Server is installed by Ansible

Congratulations, Apache is managed through Ansible


Understanding Playbook Configurations

Now that we have created our first playbook, it is important to understand how it works. All YAML files should begin with three dashes, '---', which indicate the start of a document. Then the hosts line lists one or more groups or host patterns, separated by colons. You can mention the remote user account along with the host.

---
- hosts: 72.25.10.83
  remote_user: root

Then we have the set of tasks. Each play contains a list of tasks, which are executed in order, one at a time, against all machines matched by the host pattern, before moving on to the next task.

  tasks:
    - name: Installing Latest version of Apache
      yum: pkg=httpd state=latest
    - name: Copying the demo file
      template: src=/etc/ansible/index.html dest=/var/www/html owner=apache group=apache mode=0644
    - name: (Enable it on System Boot)
      service: name=httpd enabled=yes

Every task should have a name, which is included in the output. The output is for us, so it is nice to have reasonably good descriptions of each task step. Our first task installs the latest version of Apache, the second copies the demo html file (/etc/ansible/index.html) to the /var/www/html directory on the remote hosts, and the third enables auto-start of the Apache service during system boot.

After that ‘notify’ actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once even if notified by multiple different tasks.

      notify:
        - start apache

The 'notify' list contains an item called “start apache”. This is a reference to a handler, which can perform certain functions when it is called from within a task. We will define the “start apache” handler below.

  handlers:
    - name: start apache
      service: name=httpd state=started

Handlers are lists of tasks that only run when a task reports that changes have occurred on the client system. Here we have a handler that starts the Apache service after the package is installed.

Running Playbook in Ansible

After setting up your playbook, you can run it using the command below.

# ansible-playbook -l 72.25.10.83 httpd.yml

running playbook

After this, open your browser and navigate to the IP address of the remote host mentioned in the ansible inventory.

http://your_client_node_ip/

Apache with Ansible

If you get the above page, it means that you have successfully installed Apache with the Ansible playbook. In the same way, you can create many playbooks to install complex applications on multiple hosts.

Conclusion

Ansible is quite interesting, lightweight and very easy to use; you can get up and running in five minutes. You have successfully installed Ansible on CentOS 7 and learnt its basic usage by creating a simple playbook to automate an Apache installation. I hope you have found this helpful for automating your tasks.

The post How to Setup Ansible Automation Tool in CentOS 7 appeared first on LinOxide.

How to Setup ProjectSend File sharing Tool on CentOS 7


ProjectSend is an open source web file and image sharing tool for professionals that solves the problem of sharing files between a company and its clients. ProjectSend provides easy and secure multi-file uploading with unlimited file size on any server, even on common shared hosting accounts. It is basically a client-oriented file uploading utility: clients are created and assigned a username and a password, and you can then upload as many files as you want under each account, with the ability to add a title and description to each one. When clients log in, they see a web page containing your company logo and a sortable list of every file uploaded for them, with description, time, date and so on. It also works as a history of "sent" files: you can check the differences between versions, the time that it took, and so on. Additional benefits of using ProjectSend include saving hundreds of megabytes on email accounts, since every file remains on your server until you decide to delete it, and files can be accessed from any browser anywhere.

Let's follow the instructions to install and use ProjectSend on CentOS 7 server with LAMP stack.

1) System Update

Connect to your CentOS 7 server using your root user credentials and, after setting up the fully qualified domain name of your server, run the following command to update your server with the latest updates, security patches and kernel release.

# yum -y upgrade

2) LAMP Setup

Now you have an updated system ready for the installation of the packages required by the ProjectSend application. You need to set up the LAMP (Linux, Apache, MySQL, PHP) stack as a prerequisite for ProjectSend.

Installing Apache
Run the following command to install Apache Web server on CentOS 7.

# yum install httpd openssl mod_ssl

Apache Installation

Once installed, start its service and enable it to start at boot.

# systemctl start httpd

# systemctl enable httpd

You can verify by opening your favorite web browser and entering the IP address of your server in the URL bar; you should get a “Testing 123″ page.

Installing MySQL-MariaDB

MariaDB is a drop-in replacement for MySQL; it is a robust, scalable and reliable SQL server that comes with a rich set of enhancements. We will be using the 'yum' command to install MariaDB as shown.

# yum install mariadb mariadb-server

MariaDB Installation

To start MariaDB and enable its service on your system, run the following commands.

# systemctl enable mariadb
# systemctl start mariadb

Start MariaDB

By default, MariaDB is not hardened. You can secure MariaDB using the 'mysql_secure_installation' script by choosing the appropriate options as shown .

# mysql_secure_installation

/usr/bin/mysql_secure_installation: line 379: find_mysql_client: command not found

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Secure MariaDB

Installing PHP and its Modules

Run the command below to install PHP along with the necessary modules required by ProjectSend on CentOS 7.

# yum install php php-mysql php-gd php-ldap php-odbc php-pear php-xml php-xmlrpc php-mbstring php-snmp php-mcrypt

PHP and its modules

3) Installing ProjectSend

After completing the LAMP setup, we move on to the installation of the ProjectSend application on our CentOS 7 server. To download its package, go to the ProjectSend download page.

Download Projectsend

You can also get the package using the 'wget' utility on your server and then extract it with the 'unzip' command. Make sure that you have the 'wget' and 'unzip' packages installed on your server before running the commands below.

# wget https://github.com/ignacionelson/ProjectSend/archive/master.zip

# unzip master.zip

wget ProjectSend

Now move the extracted ProjectSend directory to the document root of your web server using the command below.

# mv ProjectSend-master/ /var/www/html/projectsend

Change the ownership of the 'projectsend' folder to the apache user using the command below.

# chown apache: -R /var/www/html/projectsend

4) Setup DB for ProjectSend

In this step we are going to log in to the MariaDB console and create a database for ProjectSend by running the following commands, providing the root user credentials that we set up earlier.

# mysql -u root -p

> CREATE DATABASE psdb;
> GRANT ALL PRIVILEGES ON psdb.* TO 'psuser'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
> FLUSH PRIVILEGES;
> exit;

ProjectSend DB

5) ProjectSend Configurations

To configure ProjectSend, you need to rename the 'sys.config.sample.php' file to 'sys.config.php' in the following directory using the 'mv' command, then open it in your editor to configure its parameters.

# cd /var/www/html/projectsend/includes

# mv sys.config.sample.php sys.config.php

# vi sys.config.php

ProjectSend Configurations

Change the configuration to match your database settings, then close the file after saving the changes.

/**
* Enter your database connection information here
* If you have doubts about this values, consult your web hosting provider.
*/

/** MySQL database name */
define('DB_NAME', 'database');

/** Database host (in most cases it's localhost) */
define('DB_HOST', 'localhost');

/** MySQL username (must be assigned to the database) */
define('DB_USER', 'username');

/** MySQL password */
define('DB_PASSWORD', 'password');

/**
* Prefix for the tables. Set to something other than tbl_ for increased
* security or in case you want more than one installation on the same database.
*/
define('TABLES_PREFIX', 'tbl_');

/*
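For example, filling these defines in to match the database we created in step 4 would look like this (substitute your own password):

define('DB_NAME', 'psdb');
define('DB_HOST', 'localhost');
define('DB_USER', 'psuser');
define('DB_PASSWORD', 'password');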

6) Apache WebServer Configurations

Configure the default configuration file of the Apache web server to point at the document root directory where you placed ProjectSend, by opening the file in your editor.

# vim /etc/httpd/conf/httpd.conf

DocumentRoot "/var/www/html/projectsend"
# Relax access to content within /var/www.
<Directory "/var/www/html">
AllowOverride None
# Allow open access:
Require all granted

# Further relax access to the default document root:
<Directory "/var/www/html/projectsend">

Save and close the file and restart your Apache and MariaDB services with below commands.

# systemctl restart httpd
# systemctl restart mariadb

7) Firewall and SELinux

Our installation is almost done. Before accessing ProjectSend in the web browser, make sure the respective services/ports are allowed in your firewall. Run the following commands to open the HTTP service in your system's firewall.

# firewall-cmd --permanent --zone=public --add-service=http
# firewall-cmd --reload

Change the SELinux state to permissive mode for the time being with the following command; later on you can configure its policy if required.

# setenforce 0

8) ProjectSend Web Access

Now it's time to open your web browser and access the web console of ProjectSend using your FQDN or server IP address on the default port '80'.

http://your_servers_ip/

Configure the basic system and default system administration options, then click on the Install button to proceed.

ProjectSend Web settings

Once everything is fine, you will get the window below congratulating you on the successful installation of ProjectSend.

Project Send Installed

9) Using ProjectSend

After the basic system settings of the ProjectSend web setup, log in using your admin username and password to start using the ProjectSend file sharing application.

ProjectSend Login

Welcome to the ProjectSend dashboard. Here you can see stats about all of your files and images.

ProjectSend Dashboard

Now, to start uploading files and sharing them with your clients, first add a client, then click on the Files bar and choose the 'Upload' button from the drop-down to add and upload your files.

Uploading files

After uploading your files, you can choose the particular client with whom you want to share them. To check the status of and manage your uploaded files, click on the 'Manage Files' option under the Files bar as shown below.

Managing files

Conclusion

Thanks for reading this post; now start using ProjectSend and enjoy its awesome features. You can create as many new users as you want, upload files, create groups and more from the dashboard. I hope you have enjoyed this; don't forget to share your comments and suggestions.

The post How to Setup ProjectSend File sharing Tool on CentOS 7 appeared first on LinOxide.

How to Secure CentOS 7 Server with ModSecurity


ModSecurity is an open source web application firewall which enables web application defenders to gain visibility into HTTP traffic and provides powerful rule sets for strong security and protection. It provides a full package with real-time web monitoring, logging and access control. The rule sets can be customized and managed according to the user's preferences. The freedom to choose what to do is an essential advantage of ModSecurity and really adds to the value of open source. With full access to the source code, we have the ability to customize and extend the tool to fit our needs.

In this article, I'm explaining how to install and configure ModSecurity on a CentOS 7 server. Let's walk through the installation steps.

First of all, I would like to verify the server settings, mainly the present Apache version and the modules installed.

[root@server1 ~]# httpd -V
Server version: Apache/2.4.6 (CentOS)
Server built: Nov 19 2015 21:43:13
Server's Module Magic Number: 20120211:24
Server loaded: APR 1.4.8, APR-UTIL 1.5.2
Compiled using: APR 1.4.8, APR-UTIL 1.5.2
Architecture: 64-bit
Server MPM: prefork
threaded: no
forked: yes (variable process count)
Server compiled with....
-D APR_HAS_SENDFILE
-D APR_HAS_MMAP
-D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
-D APR_USE_SYSVSEM_SERIALIZE
-D APR_USE_PTHREAD_SERIALIZE
-D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
-D APR_HAS_OTHER_CHILD
-D AP_HAVE_RELIABLE_PIPED_LOGS
-D DYNAMIC_MODULE_LIMIT=256
-D HTTPD_ROOT="/etc/httpd"
-D SUEXEC_BIN="/usr/sbin/suexec"
-D DEFAULT_PIDLOG="/run/httpd/httpd.pid"
-D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
-D DEFAULT_ERRORLOG="logs/error_log"
-D AP_TYPES_CONFIG_FILE="conf/mime.types"
-D SERVER_CONFIG_FILE="conf/httpd.conf"

You can use this command to identify the dynamically loaded modules enabled with Apache.

[root@server1 ~]# httpd -M
Loaded Modules:
core_module (static)
so_module (static)
http_module (static)
access_compat_module (shared)
actions_module (shared)
alias_module (shared)
allowmethods_module (shared)
auth_basic_module (shared)
auth_digest_module (shared)
authn_anon_module (shared)
authn_core_module (shared)
authn_dbd_module (shared)
authn_dbm_module (shared)
authn_file_module (shared)
authn_socache_module (shared)
authz_core_module (shared)
authz_dbd_module (shared)
authz_dbm_module (shared)
authz_groupfile_module (shared)
authz_host_module (shared)
authz_owner_module (shared)
authz_user_module (shared)
autoindex_module (shared)
cache_module (shared)
cache_disk_module (shared)
data_module (shared)
dbd_module (shared)
deflate_module (shared)
dir_module (shared)
dumpio_module (shared)
echo_module (shared)
env_module (shared)
expires_module (shared)
ext_filter_module (shared)
filter_module (shared)
headers_module (shared)
include_module (shared)
info_module (shared)
log_config_module (shared)
logio_module (shared)
mime_magic_module (shared)
mime_module (shared)
negotiation_module (shared)
remoteip_module (shared)
reqtimeout_module (shared)
rewrite_module (shared)
setenvif_module (shared)
slotmem_plain_module (shared)
slotmem_shm_module (shared)
socache_dbm_module (shared)
socache_memcache_module (shared)
socache_shmcb_module (shared)
status_module (shared)
substitute_module (shared)
suexec_module (shared)
unique_id_module (shared)
unixd_module (shared)
userdir_module (shared)
version_module (shared)
vhost_alias_module (shared)
dav_module (shared)
dav_fs_module (shared)
dav_lock_module (shared)
lua_module (shared)
mpm_prefork_module (shared)
proxy_module (shared)
lbmethod_bybusyness_module (shared)
lbmethod_byrequests_module (shared)
lbmethod_bytraffic_module (shared)
lbmethod_heartbeat_module (shared)
proxy_ajp_module (shared)
proxy_balancer_module (shared)
proxy_connect_module (shared)
proxy_express_module (shared)
proxy_fcgi_module (shared)
proxy_fdpass_module (shared)
proxy_ftp_module (shared)
proxy_http_module (shared)
proxy_scgi_module (shared)
proxy_wstunnel_module (shared)
systemd_module (shared)
cgi_module (shared)
php5_module (shared)

Installation

Once you have verified the Apache setup, you can install the ModSecurity package from the CentOS base repo.

[root@server1 yum.repos.d]# yum install mod_security -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.metrocast.net
* extras: mirror.metrocast.net
* updates: mirror.metrocast.net
Resolving Dependencies
--> Running transaction check
---> Package mod_security.x86_64 0:2.7.3-5.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===============================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================
Installing:
mod_security x86_64 2.7.3-5.el7 base 177 k

Transaction Summary
===============================================================================================================================================
Install 1 Package

This will install mod_security on your server. Now we need to configure it.

Check and confirm the integration of the module with Apache

Check for the configuration file generated with the default set of rules. The configuration file will be located inside the Apache custom modules folder "/etc/httpd/conf.d/".

[root@server1 conf.d]# pwd
/etc/httpd/conf.d
[root@server1 conf.d]# ll mod_security.conf
-rw-r--r-- 1 root root 2139 Jun 10 2014 mod_security.conf

[root@server1 conf.d]# httpd -M | grep security
security2_module (shared)

Now restart Apache and check the Apache logs to verify that the mod_security module is loaded on restart.
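On CentOS 7 this is done with systemctl:

# systemctl restart httpd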

[root@server1 conf.d]# tail -f /etc/httpd/logs/error_log
[Mon Apr 18 06:24:35.170359 2016] [suexec:notice] [pid 2819] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Mon Apr 18 06:24:35.170461 2016] [:notice] [pid 2819] ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/) configured.
[Mon Apr 18 06:24:35.170469 2016] [:notice] [pid 2819] ModSecurity: APR compiled version="1.4.8"; loaded version="1.4.8"
[Mon Apr 18 06:24:35.170476 2016] [:notice] [pid 2819] ModSecurity: PCRE compiled version="8.32 "; loaded version="8.32 2012-11-30"
[Mon Apr 18 06:24:35.170483 2016] [:notice] [pid 2819] ModSecurity: LUA compiled version="Lua 5.1"
[Mon Apr 18 06:24:35.170488 2016] [:notice] [pid 2819] ModSecurity: LIBXML compiled version="2.9.1"
[Mon Apr 18 06:24:35.451568 2016] [auth_digest:notice] [pid 2819] AH01757: generating secret for digest authentication ...
[Mon Apr 18 06:24:35.452305 2016] [lbmethod_heartbeat:notice] [pid 2819] AH02282: No slotmem from mod_heartmonitor
[Mon Apr 18 06:24:35.501101 2016] [mpm_prefork:notice] [pid 2819] AH00163: Apache/2.4.6 (CentOS) PHP/5.4.16 configured -- resuming normal operations

From the logs, you can identify the ModSecurity version loaded and other details.

Identifying the Configuration

We need to go through the ModSecurity configuration file to identify the include path for the custom rules that we can add for customization, and also to identify the log file path for further analysis.

According to the configuration, we can add custom rules inside this include path:

# ModSecurity Core Rules Set configuration
IncludeOptional modsecurity.d/*.conf
IncludeOptional modsecurity.d/activated_rules/*.conf

[root@server1 modsecurity.d]# pwd
/etc/httpd/modsecurity.d
[root@server1 modsecurity.d]# ll
total 4
drwxr-xr-x 2 root root 4096 Jun 10 2014 activated_rules
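As a quick illustration of what a custom rule dropped into this directory could look like (a hypothetical file name and rule ID, not part of the default set), consider:

# /etc/httpd/modsecurity.d/activated_rules/zz_custom_example.conf
# Deny any request whose URI contains the string "/etc/passwd"
SecRule REQUEST_URI "@contains /etc/passwd" \
    "id:900100,phase:2,deny,status:403,log,msg:'Attempt to access /etc/passwd'"

After adding or changing rules, restart Apache for them to take effect.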

And we can inspect the log file at /var/log/httpd/modsec_audit.log

Customizing ModSecurity with the Core rule sets

We can get the core rule sets from the official repo. These rule sets are automatically symlinked into the activated rules directory and take effect by default on install.

[root@server1 conf.d]# yum -y install mod_security_crs
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.metrocast.net
* extras: mirror.metrocast.net
* updates: mirror.metrocast.net
Resolving Dependencies
--> Running transaction check
---> Package mod_security_crs.noarch 0:2.2.6-6.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===============================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================
Installing:
mod_security_crs noarch 2.2.6-6.el7 base 90 k

These are the general core rule sets installed from the official repo. We need to modify certain rules to prevent them from blocking legitimate server requests.

[root@server1 base_rules]# ll
total 332
-rw-r--r-- 1 root root 1980 Jun 9 2014 modsecurity_35_bad_robots.data
-rw-r--r-- 1 root root 386 Jun 9 2014 modsecurity_35_scanners.data
-rw-r--r-- 1 root root 3928 Jun 9 2014 modsecurity_40_generic_attacks.data
-rw-r--r-- 1 root root 2610 Jun 9 2014 modsecurity_41_sql_injection_attacks.data
-rw-r--r-- 1 root root 2224 Jun 9 2014 modsecurity_50_outbound.data
-rw-r--r-- 1 root root 56714 Jun 9 2014 modsecurity_50_outbound_malware.data
-rw-r--r-- 1 root root 22861 Jun 9 2014 modsecurity_crs_20_protocol_violations.conf
-rw-r--r-- 1 root root 6915 Jun 9 2014 modsecurity_crs_21_protocol_anomalies.conf
-rw-r--r-- 1 root root 3792 Jun 9 2014 modsecurity_crs_23_request_limits.conf
-rw-r--r-- 1 root root 6933 Jun 9 2014 modsecurity_crs_30_http_policy.conf
-rw-r--r-- 1 root root 5394 Jun 9 2014 modsecurity_crs_35_bad_robots.conf
-rw-r--r-- 1 root root 19157 Jun 9 2014 modsecurity_crs_40_generic_attacks.conf
-rw-r--r-- 1 root root 43961 Jun 9 2014 modsecurity_crs_41_sql_injection_attacks.conf
-rw-r--r-- 1 root root 87470 Jun 9 2014 modsecurity_crs_41_xss_attacks.conf
-rw-r--r-- 1 root root 1795 Jun 9 2014 modsecurity_crs_42_tight_security.conf
-rw-r--r-- 1 root root 3660 Jun 9 2014 modsecurity_crs_45_trojans.conf
-rw-r--r-- 1 root root 2253 Jun 9 2014 modsecurity_crs_47_common_exceptions.conf
-rw-r--r-- 1 root root 2787 Jun 9 2014 modsecurity_crs_48_local_exceptions.conf.example
-rw-r--r-- 1 root root 1835 Jun 9 2014 modsecurity_crs_49_inbound_blocking.conf
-rw-r--r-- 1 root root 22314 Jun 9 2014 modsecurity_crs_50_outbound.conf
-rw-r--r-- 1 root root 1448 Jun 9 2014 modsecurity_crs_59_outbound_blocking.conf
-rw-r--r-- 1 root root 2674 Jun 9 2014 modsecurity_crs_60_correlation.conf
[root@server1 base_rules]# pwd
/usr/lib/modsecurity.d/base_rules

These rules are automatically symlinked into the activated rule set and enabled by default on installation.

[root@server1 activated_rules]# ls
modsecurity_35_bad_robots.data modsecurity_crs_23_request_limits.conf modsecurity_crs_47_common_exceptions.conf
modsecurity_35_scanners.data modsecurity_crs_30_http_policy.conf modsecurity_crs_48_local_exceptions.conf.example
modsecurity_40_generic_attacks.data modsecurity_crs_35_bad_robots.conf modsecurity_crs_49_inbound_blocking.conf
modsecurity_41_sql_injection_attacks.data modsecurity_crs_40_generic_attacks.conf modsecurity_crs_50_outbound.conf
modsecurity_50_outbound.data modsecurity_crs_41_sql_injection_attacks.conf modsecurity_crs_59_outbound_blocking.conf
modsecurity_50_outbound_malware.data modsecurity_crs_41_xss_attacks.conf modsecurity_crs_60_correlation.conf
modsecurity_crs_20_protocol_violations.conf modsecurity_crs_42_tight_security.conf
modsecurity_crs_21_protocol_anomalies.conf modsecurity_crs_45_trojans.conf
[root@server1 activated_rules]# pwd
/etc/httpd/modsecurity.d/activated_rules


We can further customize ModSecurity by choosing rule sets from the OWASP CRS.

OWASP ModSecurity CRS provides a set of generic attack detection rules to ensure base-level protection for web applications, and we can make it more complex as per our security needs. OWASP CRS provides protection in the following categories:

  • HTTP Protection
  • Real-time Blacklist Lookups
  • Protection against DDoS Attacks
  • Common Web Attacks Protection
  • Automation Detection - detecting bots, crawlers, scanners and other surface malicious activity
  • Detection of Malicious File Uploads via the web, with AV scanning
  • Tracking Sensitive Data - tracks credit card usage and blocks leakages
  • Trojan Protection
  • Identification of Application Defects
  • Error Detection and Hiding

You can refer to this OWASP CRS directives guide to configure your own rule sets.

To install the OWASP CRS rule set instead of the default generic rules from the official repo, download the OWASP CRS and copy the configuration file and rule sets to the /etc/httpd/modsecurity.d/ folder.

Before enabling the OWASP Core Rule Set, you can remove the mod_security_crs package that was installed from the repo.
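For example, with yum:

# yum remove mod_security_crs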

Now go to the /usr/local/src folder and download the archive from the OWASP CRS download page.

[root@server1 src]# wget https://github.com/SpiderLabs/owasp-modsecurity-crs/zipball/master
--2016-04-18 08:28:01-- https://github.com/SpiderLabs/owasp-modsecurity-crs/zipball/master
Resolving github.com (github.com)... 192.30.252.131
Connecting to github.com (github.com)|192.30.252.131|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/SpiderLabs/owasp-modsecurity-crs/legacy.zip/master [following]
--2016-04-18 08:28:01-- https://codeload.github.com/SpiderLabs/owasp-modsecurity-crs/legacy.zip/master
Resolving codeload.github.com (codeload.github.com)... 192.30.252.161
Connecting to codeload.github.com (codeload.github.com)|192.30.252.161|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/zip]
Saving to: ‘master’

[ <=> ] 3,43,983 --.-K/s in 0.04s

2016-04-18 08:28:02 (7.68 MB/s) - ‘master’ saved [343983]

Since the downloaded file is a zip archive, extract it to access the contents.

[root@server1 src]# file master
master: Zip archive data, at least v1.0 to extract

[root@server1 src]# unzip master

[root@server1 src]# ls
master SpiderLabs-owasp-modsecurity-crs-f16e0b1

Once the files are extracted, copy the CRS configuration file and the base rule set to the location /etc/httpd/modsecurity.d/

[root@server1 modsecurity.d]# cp -rp /usr/local/src/SpiderLabs-owasp-modsecurity-crs-f16e0b1/modsecurity_crs_10_setup.conf.example .
[root@server1 modsecurity.d]# ls
activated_rules modsecurity_crs_10_setup.conf.example
[root@server1 modsecurity.d]# mv modsecurity_crs_10_setup.conf.example modsecurity_crs_10_setup.conf

Now change directory to the activated_rules folder and copy the base rules from the extracted download into it.

[root@server1 modsecurity.d]# cd activated_rules/

[root@server1 activated_rules]# cp -rp /usr/local/src/SpiderLabs-owasp-modsecurity-crs-f16e0b1/base_rules/* .
[root@server1 activated_rules]# ll
total 344
-rw-r--r-- 1 root root 1969 Apr 14 08:49 modsecurity_35_bad_robots.data
-rw-r--r-- 1 root root 386 Apr 14 08:49 modsecurity_35_scanners.data
-rw-r--r-- 1 root root 3928 Apr 14 08:49 modsecurity_40_generic_attacks.data
-rw-r--r-- 1 root root 2224 Apr 14 08:49 modsecurity_50_outbound.data
-rw-r--r-- 1 root root 56714 Apr 14 08:49 modsecurity_50_outbound_malware.data
-rw-r--r-- 1 root root 23038 Apr 14 08:49 modsecurity_crs_20_protocol_violations.conf
-rw-r--r-- 1 root root 8107 Apr 14 08:49 modsecurity_crs_21_protocol_anomalies.conf
-rw-r--r-- 1 root root 3792 Apr 14 08:49 modsecurity_crs_23_request_limits.conf
-rw-r--r-- 1 root root 6907 Apr 14 08:49 modsecurity_crs_30_http_policy.conf
-rw-r--r-- 1 root root 5410 Apr 14 08:49 modsecurity_crs_35_bad_robots.conf
-rw-r--r-- 1 root root 20881 Apr 14 08:49 modsecurity_crs_40_generic_attacks.conf
-rw-r--r-- 1 root root 44677 Apr 14 08:49 modsecurity_crs_41_sql_injection_attacks.conf
-rw-r--r-- 1 root root 99654 Apr 14 08:49 modsecurity_crs_41_xss_attacks.conf
-rw-r--r-- 1 root root 1795 Apr 14 08:49 modsecurity_crs_42_tight_security.conf
-rw-r--r-- 1 root root 3660 Apr 14 08:49 modsecurity_crs_45_trojans.conf
-rw-r--r-- 1 root root 2247 Apr 14 08:49 modsecurity_crs_47_common_exceptions.conf
-rw-r--r-- 1 root root 2787 Apr 14 08:49 modsecurity_crs_48_local_exceptions.conf.example
-rw-r--r-- 1 root root 1838 Apr 14 08:49 modsecurity_crs_49_inbound_blocking.conf
-rw-r--r-- 1 root root 22328 Apr 14 08:49 modsecurity_crs_50_outbound.conf
-rw-r--r-- 1 root root 1448 Apr 14 08:49 modsecurity_crs_59_outbound_blocking.conf
-rw-r--r-- 1 root root 2674 Apr 14 08:49 modsecurity_crs_60_correlation.conf

Once the rules are copied, you can restart Apache and confirm its status to make sure everything is configured correctly.

[root@server1 activated_rules]# systemctl status httpd
httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled)
Active: active (running) since Mon 2016-04-18 08:35:13 UTC; 16s ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 3571 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
Main PID: 3576 (httpd)
Status: "Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec"
CGroup: /system.slice/httpd.service
├─3576 /usr/sbin/httpd -DFOREGROUND
├─3578 /usr/sbin/httpd -DFOREGROUND
├─3579 /usr/sbin/httpd -DFOREGROUND
├─3580 /usr/sbin/httpd -DFOREGROUND
├─3581 /usr/sbin/httpd -DFOREGROUND
└─3582 /usr/sbin/httpd -DFOREGROUND

Apr 18 08:35:12 server1.centos7-test.com systemd[1]: Starting The Apache HTTP Server...
Apr 18 08:35:13 server1.centos7-test.com systemd[1]: Started The Apache HTTP Server.
[root@server1 activated_rules]# tail -f /etc/httpd/logs/error_log
[Mon Apr 18 08:35:13.237779 2016] [suexec:notice] [pid 3576] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Mon Apr 18 08:35:13.237912 2016] [:notice] [pid 3576] ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/) configured.
[Mon Apr 18 08:35:13.237921 2016] [:notice] [pid 3576] ModSecurity: APR compiled version="1.4.8"; loaded version="1.4.8"
[Mon Apr 18 08:35:13.237929 2016] [:notice] [pid 3576] ModSecurity: PCRE compiled version="8.32 "; loaded version="8.32 2012-11-30"
[Mon Apr 18 08:35:13.237936 2016] [:notice] [pid 3576] ModSecurity: LUA compiled version="Lua 5.1"
[Mon Apr 18 08:35:13.237941 2016] [:notice] [pid 3576] ModSecurity: LIBXML compiled version="2.9.1"
[Mon Apr 18 08:35:13.441258 2016] [auth_digest:notice] [pid 3576] AH01757: generating secret for digest authentication ...
[Mon Apr 18 08:35:13.442048 2016] [lbmethod_heartbeat:notice] [pid 3576] AH02282: No slotmem from mod_heartmonitor
[Mon Apr 18 08:35:13.476079 2016] [mpm_prefork:notice] [pid 3576] AH00163: Apache/2.4.6 (CentOS) configured -- resuming normal operations
[Mon Apr 18 08:35:13.476135 2016] [core:notice] [pid 3576] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'

Ensuring Server Security with ModSecurity

Now we can test that ModSecurity is working on our server. Just try to access a restricted file on the server via the browser. I tried accessing the /etc/shadow file from the browser, and it reported a Forbidden error.
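You can run the same kind of test from a terminal with curl (assuming curl is installed); given the rule match shown in the audit log below, a 403 response is what you should expect:

$ curl -I http://45.33.76.60/etc/shadow
HTTP/1.1 403 Forbidden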

forbidden


Let's inspect the details on the server from the ModSecurity audit log (/var/log/httpd/modsec_audit.log). This is what is reported on the server end:


--ffddb332-A--
[19/Apr/2016:05:40:50 +0000] VxXE4nawj6tDGNi3ESgy8gAAAAM 101.63.70.47 60553 45.33.76.60 80
--ffddb332-B--
GET /etc/shadow HTTP/1.1
Host: 45.33.76.60
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:43.0) Gecko/20100101 Firefox/43.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: Drupal.toolbar.collapsed=0; SESS1dba846f2abd54265ae8178776146216=cVBBus2vUq_iMWD3mvj-0rM8ca21X1D7UrcVRzsmIZ8
Connection: keep-alive

--ffddb332-F--
HTTP/1.1 403 Forbidden
Content-Length: 212
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1

--ffddb332-E--

--ffddb332-H--
Message: Access denied with code 403 (phase 2). Pattern match "^[\\d.:]+$" at REQUEST_HEADERS:Host. [file "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_21_protocol_anomalies.conf"] [line "98"] [id "960017"] [rev "2"] [msg "Host header is a numeric IP address"] [data "45.33.76.60"] [severity "WARNING"] [ver "OWASP_CRS/2.2.9"] [maturity "9"] [accuracy "9"] [tag "OWASP_CRS/PROTOCOL_VIOLATION/IP_HOST"] [tag "WASCTC/WASC-21"] [tag "OWASP_TOP_10/A7"] [tag "PCI/6.5.10"] [tag "http://technet.microsoft.com/en-us/magazine/2005.01.hackerbasher.aspx"]
Action: Intercepted (phase 2)
Stopwatch: 1461044450304152 4953 (- - -)
Stopwatch2: 1461044450304152 4953; combined=735, p1=505, p2=135, p3=0, p4=0, p5=91, sr=158, sw=4, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.7.3 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache/2.4.6 (CentOS)
Engine-Mode: "ENABLED"

--ffddb332-Z--

This log clearly shows that the IP 101.63.70.47 was trying to download the file /etc/shadow from the server, and that the server responded with a Forbidden error. The log also records why the request was denied: it matched the ModSecurity rule in "/etc/httpd/modsecurity.d/activated_rules/modsecurity_crs_21_protocol_anomalies.conf". The server identified this web request as a violation of the specified ModSecurity rule and thus returned this error code.

Now you see how easy it is to install and configure ModSecurity on CentOS 7. When properly configured, ModSecurity hardens an Apache web server against several threats, including DDoS attacks, SQL injection and other malicious attacks, and should be considered for any deployment exposed to the Internet.

I hope you enjoyed reading this article. Thank you for reading :) Your valuable suggestions and recommendations are welcome.

The post How to Secure CentOS 7 Server with ModSecurity appeared first on LinOxide.

How to Build and Run your apps using Docker Compose

$
0
0

Docker Compose, formerly known as Fig, is a tool which can define and run complex applications using Docker. Basically, it does the job of creating multiple containers and the links between them.

Using Docker Compose requires defining the environment for your app in a Dockerfile and the services required by the app in a '.yml' file. You can then use a single command to create and start all the services from the configuration file. In this write-up, we will learn how to install and use Compose on Ubuntu 15.10. We will also touch upon how to run your app using it.

Installation

The prerequisite for installing Docker Compose is Docker Engine. If you have not already installed Docker Engine, here are the quick steps for it:

$ sudo apt-get update

$ sudo apt-get install linux-image-extra-$(uname -r)

$ sudo apt-get install docker-engine

To verify that the Docker installation was successful, let us run the hello-world image:

$ sudo docker run hello-world

poornima@BNPTHKPD:~$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
03f4658f8b78: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:8be990ef2aeb16dbcb9271ddfe2610fa6658d13f6dfb8bc72074cc1ca36966a7
Status: Downloaded newer image for hello-world:latest

Hello from Docker.
This message shows that your installation appears to be working correctly.

.......

Now to install Compose, execute the following curl command in the terminal:

$ curl -L https://github.com/docker/compose/releases/download/1.7.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose

Set executable permissions on the binary:

$ chmod +x /usr/local/bin/docker-compose

Verify that the installation is complete:

$ docker-compose --version

root@BNPTHKPD:/home/poornima# docker-compose --version
docker-compose version 1.7.0, build 0d7bf73

Docker compose commands

Before we go ahead and learn how to run your applications using Compose, let me introduce you to some of its important commands.

The syntax for the docker-compose command is:

docker-compose [-f=...] [options] [commands] [args]

One can verify the version of docker-compose using:

docker-compose  -v, --version

When used with the 'ps' command, it lists the available containers:

docker-compose ps

You can create and start your containers using the 'up' command; adding the '-d' option will run the containers in the background:

docker-compose up -d

Listed below are some more commands that can be used with docker-compose:

build -  to build services

pause / unpause  -  pause and unpause services

stop -  stop services

restart - restart services

rm  - remove stopped containers

logs - view output from containers

help - to get help on a command

Building and Running your app

In this section, we will focus on building a simple Python web application and running it using Docker Compose.

Setup

Let us first create a directory for our app.

poornima@BNPTHKPD:~$ mkdir dockcompose
poornima@BNPTHKPD:~$ cd dockcompose/

We will now create a Hello World web application myapp.py using the Flask framework in this directory with the following contents:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World using Docker Compose'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')

Create a requirements.txt file in the same project directory containing the following:

Flask==0.10.1

In the next step, we need to create a Docker image containing all the dependencies that the Python application needs.

Creating a Docker Image

For this, create a file called Dockerfile again in the project directory with the following contents:

FROM python:2.7

ADD . /code

WORKDIR /code

RUN pip install -r requirements.txt

CMD python myapp.py

This file tells Docker to build an image based on Python 2.7, add the current directory '.' into the path '/code' in the image, set the working directory to /code, install the Python dependencies listed in the requirements.txt file, and set the default command of the container to 'python myapp.py'.

We will now build the image.

$ docker build -t web .

poornima@BNPTHKPD:~/dockcompose$ sudo docker build -t web .
Sending build context to Docker daemon 5.632 kB
Step 1 : FROM python:2.7
---> a3b29970a425
Step 2 : ADD . /code
---> 855e1a126850
Removing intermediate container 2e713165c053
Step 3 : WORKDIR /code
---> Running in 431e3f52f421
---> 157b4cffd6df
Removing intermediate container 431e3f52f421
Step 4 : RUN pip install -r requirements.txt
---> Running in 07c294591a76
Collecting Flask==0.10.1 (from -r requirements.txt (line 1))
Downloading Flask-0.10.1.tar.gz (544kB)
Collecting Werkzeug>=0.7 (from Flask==0.10.1->-r requirements.txt (line 1))
Downloading Werkzeug-0.11.8-py2.py3-none-any.whl (306kB)
Collecting Jinja2>=2.4 (from Flask==0.10.1->-r requirements.txt (line 1))
Downloading Jinja2-2.8-py2.py3-none-any.whl (263kB)
Collecting itsdangerous>=0.21 (from Flask==0.10.1->-r requirements.txt (line 1))
Downloading itsdangerous-0.24.tar.gz (46kB)
Collecting MarkupSafe (from Jinja2>=2.4->Flask==0.10.1->-r requirements.txt (line 1))
Downloading MarkupSafe-0.23.tar.gz
Building wheels for collected packages: Flask, itsdangerous, MarkupSafe
Running setup.py bdist_wheel for Flask: started
Running setup.py bdist_wheel for Flask: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/d2/db/61/cb9b80526b8f3ba89248ec0a29d6da1bb6013681c930fca987
Running setup.py bdist_wheel for itsdangerous: started
Running setup.py bdist_wheel for itsdangerous: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/97/c0/b8/b37c320ff57e15f993ba0ac98013eee778920b4a7b3ebae3cf
Running setup.py bdist_wheel for MarkupSafe: started
Running setup.py bdist_wheel for MarkupSafe: finished with status 'done'
Stored in directory: /root/.cache/pip/wheels/94/a7/79/f79a998b64c1281cb99fa9bbd33cfc9b8b5775f438218d17a7
Successfully built Flask itsdangerous MarkupSafe
Installing collected packages: Werkzeug, MarkupSafe, Jinja2, itsdangerous, Flask
Successfully installed Flask-0.10.1 Jinja2-2.8 MarkupSafe-0.23 Werkzeug-0.11.8 itsdangerous-0.24
---> 9ef07d3ed698
Removing intermediate container 07c294591a76
Step 5 : CMD python myapp.py
---> Running in ac5ce91ddc85
---> 65d218cbea14
Removing intermediate container ac5ce91ddc85
Successfully built 65d218cbea14
poornima@BNPTHKPD:~/dockcompose$ sudo docker-compose up
Starting dockcompose_web_1
Attaching to dockcompose_web_1
web_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
web_1 | * Restarting with stat
web_1 | * Debugger is active!
web_1 | * Debugger pin code: 151-559-328
web_1 | 172.17.0.1 - - [19/Apr/2016 08:17:43] "GET / HTTP/1.1" 200 -
web_1 | 172.17.0.1 - - [19/Apr/2016 08:17:43] "GET /favicon.ico HTTP/1.1" 404 -

This command automatically locates the required Dockerfile, requirements.txt and myapp.py files and builds the image 'web' using them.

Services file

The different services to be used in your app should be put together in a file called docker-compose.yml. The services file for our Python Hello World app looks like the following:

web:
  build: .
  ports:
    - "5000:5000"

The above file defines the web service, which builds from the current directory and forwards the container's exposed port 5000 to port 5000 on the host.

Running the app

Now let's go to the project directory and run the app:

$ docker-compose up

In order to view the app running, point your web browser to the following:

http://0.0.0.0:5000
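You can also check it from a terminal (assuming curl is installed); the app should return the greeting string defined in myapp.py:

$ curl http://0.0.0.0:5000
Hello World using Docker Compose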

Here is how your browser displays the message for you:

Running an app using Docker Compose

More commands

Let us try out some commands to see how they work.

Note: All docker-compose commands should be run from the directory that contains docker-compose.yml.

To run your services in the background, use the '-d' option.

poornima@BNPTHKPD:~/dockcompose$ sudo docker-compose up -d
Starting dockcompose_web_1

To list all the containers, use the 'ps' command.

poornima@BNPTHKPD:~/dockcompose$ sudo docker-compose ps
Name Command State Ports
-------------------------------------------------------------------------
dockcompose_web_ /bin/sh -c Up 0.0.0.0:5000->50
1 python myapp.py 00/tcp

To take a look at the environment variables available in the configured service (in this case, web), run the 'env' command inside it:

poornima@BNPTHKPD:~/dockcompose$ sudo docker-compose run web env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=d5c2b9eeab7f
TERM=xterm
WEB_PORT=tcp://172.17.0.2:5000
WEB_PORT_5000_TCP=tcp://172.17.0.2:5000
WEB_PORT_5000_TCP_ADDR=172.17.0.2
WEB_PORT_5000_TCP_PORT=5000
WEB_PORT_5000_TCP_PROTO=tcp
WEB_NAME=/dockcompose_web_run_1/web
WEB_ENV_LANG=C.UTF-8
WEB_ENV_GPG_KEY=C01E1CAD5EA2C4F0B8E3571504C367C218ADD4FF
WEB_ENV_PYTHON_VERSION=2.7.11
WEB_ENV_PYTHON_PIP_VERSION=8.1.1
WEB_1_PORT=tcp://172.17.0.2:5000
WEB_1_PORT_5000_TCP=tcp://172.17.0.2:5000
WEB_1_PORT_5000_TCP_ADDR=172.17.0.2
WEB_1_PORT_5000_TCP_PORT=5000
WEB_1_PORT_5000_TCP_PROTO=tcp
WEB_1_NAME=/dockcompose_web_run_1/web_1
WEB_1_ENV_LANG=C.UTF-8
WEB_1_ENV_GPG_KEY=C01E1CAD5EA2C4F0B8E3571504C367C218ADD4FF
WEB_1_ENV_PYTHON_VERSION=2.7.11
WEB_1_ENV_PYTHON_PIP_VERSION=8.1.1
DOCKCOMPOSE_WEB_1_PORT=tcp://172.17.0.2:5000
DOCKCOMPOSE_WEB_1_PORT_5000_TCP=tcp://172.17.0.2:5000
DOCKCOMPOSE_WEB_1_PORT_5000_TCP_ADDR=172.17.0.2
DOCKCOMPOSE_WEB_1_PORT_5000_TCP_PORT=5000
DOCKCOMPOSE_WEB_1_PORT_5000_TCP_PROTO=tcp
DOCKCOMPOSE_WEB_1_NAME=/dockcompose_web_run_1/dockcompose_web_1
DOCKCOMPOSE_WEB_1_ENV_LANG=C.UTF-8
DOCKCOMPOSE_WEB_1_ENV_GPG_KEY=C01E1CAD5EA2C4F0B8E3571504C367C218ADD4FF
DOCKCOMPOSE_WEB_1_ENV_PYTHON_VERSION=2.7.11
DOCKCOMPOSE_WEB_1_ENV_PYTHON_PIP_VERSION=8.1.1
LANG=C.UTF-8
GPG_KEY=C01E1CAD5EA2C4F0B8E3571504C367C218ADD4FF
PYTHON_VERSION=2.7.11
PYTHON_PIP_VERSION=8.1.1
HOME=/root

In order to stop containers, use the 'stop' option:

poornima@BNPTHKPD:~/dockcompose$ sudo docker-compose stop
Stopping dockcompose_web_1 ... done

You can pause and unpause them using the 'pause' and 'unpause' options.

poornima@BNPTHKPD:~/dockcompose$ sudo docker-compose pause
Pausing dockcompose_web_1 ... done

poornima@BNPTHKPD:~/dockcompose$ sudo docker-compose ps
Name Command State Ports
-------------------------------------------------------------------------
dockcompose_web_ /bin/sh -c Paused 0.0.0.0:5000->50
1 python myapp.py 00/tcp

poornima@BNPTHKPD:~/dockcompose$ sudo docker-compose unpause
Unpausing dockcompose_web_1 ... done

Use the 'kill' command to stop all containers in one go.

$ sudo docker-compose kill

Compose logs can be reviewed using the 'logs' command.

$ sudo docker-compose logs

If you need to list all the available commands and options, use the --help option.

poornima@BNPTHKPD:~/dockcompose$ sudo docker-compose --help
Define and run multi-container applications with Docker.

Usage:
docker-compose [-f=...] [options] [COMMAND] [ARGS...]
docker-compose -h|--help

Options:
-f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name (default: directory name)
--verbose Show more output
-v, --version Print version and exit
-H, --host HOST Daemon socket to connect to

--tls Use TLS; implied by --tlsverify
--tlscacert CA_PATH Trust certs signed only by this CA
--tlscert CLIENT_CERT_PATH Path to TLS certificate file
--tlskey TLS_KEY_PATH Path to TLS key file
--tlsverify Use TLS and verify the remote
--skip-hostname-check Don't check the daemon's hostname against the name specified
in the client certificate (for example if your docker host
is an IP address)

Commands:
build Build or rebuild services
config Validate and view the compose file
create Create services
down Stop and remove containers, networks, images, and volumes
events Receive real time events from containers
exec Execute a command in a running container
help Get help on a command
kill Kill containers
logs View output from containers
pause Pause services
port Print the public port for a port binding
ps List containers
pull Pulls service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
unpause Unpause services
up Create and start containers
version Show the Docker-Compose version information

Conclusion

Docker Compose is a wonderful tool for easily building feature-rich and scalable apps. With its capabilities and simplicity, it is quite useful as a development tool. For more information on the tool and on building apps, please visit its official page.

The post How to Build and Run your apps using Docker Compose appeared first on LinOxide.

How to Install Bamboo on CentOS 7

$
0
0

Bamboo is a continuous integration and deployment server. It provides an automated and reliable build/test process for software source code and is an efficient way to manage builds that have different requirements. The build and test processes are triggered automatically on completion of the code. It provides a sophisticated methodology for software development teams, including:

a) An automated building and testing of software source-code
b) Providing updates on successful and failed builds
c) Reporting tools for statistical Analysis
d) Build information

System Requirements for the installation

Hardware Considerations:

  1. The software only supports 64-bit hardware platforms.
  2. The CPU/RAM requirements depend upon the complexity of the plans. For a minimal installation setup, I recommend at least a 4-core CPU and 2 GB RAM.
  3. 20 GB of storage is the minimum requirement for the installation.

Software Considerations:

  1. Bamboo requires a full Java Development Kit (JDK) platform to be installed on the server. It is purely a Java application and runs on any platform provided all the Java requirements are satisfied.
  2. It is a web application, hence it needs an application server. Tomcat is the application server used for this.
  3. It supports almost all popular relational database servers like PostgreSQL, MySQL, Oracle, Microsoft SQL Server, etc.

In this article, I'm providing the guidelines for the installation of this Web Application on a CentOS 7 server. Let's walk through the installation steps.

1. Check the supported platforms

As mentioned above, you can check and confirm the availability of the system requirements including the hardware and software considerations.

2. Check the Java version

This application requires JDK 1.8 to be installed on the server. If you have not installed it yet, make sure you download and install this exact JDK version as required.

[root@server1 kernels]# yum install java-1.8.0-openjdk

Dependencies Resolved

===============================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================
Installing:
java-1.8.0-openjdk x86_64 1:1.8.0.91-0.b14.el7_2 updates 219 k
Installing for dependencies:
alsa-lib x86_64 1.0.28-2.el7 base 391 k
fontconfig x86_64 2.10.95-7.el7 base 228 k
fontpackages-filesystem noarch 1.44-8.el7 base 9.9 k
giflib x86_64 4.1.6-9.el7 base 40 k
java-1.8.0-openjdk-headless x86_64 1:1.8.0.91-0.b14.el7_2 updates 31 M
javapackages-tools noarch 3.4.1-11.el7 base 73 k
libICE x86_64 1.0.9-2.el7 base 65 k
libSM x86_64 1.2.2-2.el7 base 39 k
libXext x86_64 1.3.3-3.el7 base 39 k
libXfont x86_64 1.5.1-2.el7 base 150 k
libXi x86_64 1.7.4-2.el7 base 40 k
libXrender x86_64 0.9.8-2.1.el7 base 25 k
libXtst x86_64 1.2.2-2.1.el7 base 20 k
libfontenc x86_64 1.1.2-3.el7 base 30 k
lksctp-tools x86_64 1.0.13-3.el7 base 87 k
python-javapackages noarch 3.4.1-11.el7 base 31 k
python-lxml x86_64 3.2.1-4.el7 base 758 k
ttmkfdir x86_64 3.0.9-42.el7 base 48 k
tzdata-java noarch 2016d-1.el7 updates 179 k
xorg-x11-font-utils x86_64 1:7.5-20.el7 base 87 k
xorg-x11-fonts-Type1 noarch 7.5-9.el7 base 521 k

Transaction Summary
===============================================================================================================================================
Install 1 Package (+21 Dependent packages)

Total download size: 34 M
Installed size: 110 M

[root@server1 kernels]# echo $JAVA_HOME
[root@server1 kernels]# java -version
openjdk version "1.8.0_91"
OpenJDK Runtime Environment (build 1.8.0_91-b14)
OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
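Notice that $JAVA_HOME is empty above. Bamboo can usually locate Java on its own, but if any of your tooling needs JAVA_HOME set, one way to derive it from the installed java binary (a convenience snippet, not part of the original steps) is:

# export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))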

3. Install PostgreSQL

The Bamboo installation chooses the PostgreSQL database by default. Install this if you plan to use this database server for the application. You can also use other external databases like MySQL, but in that case you need to connect the application to the external database. A JDBC driver for PostgreSQL is bundled with the Bamboo installation, but for any other external database we need to configure Bamboo's JDBC connection to it. I've chosen to use PostgreSQL as my database server and ran this command to install it.

[root@server1 ~]# yum install postgresql
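Note that the postgresql package provides only the client tools. If you plan to run the database locally on this same server (an assumption beyond the step above), you would also need the server package initialized and started:

# yum install postgresql-server
# postgresql-setup initdb
# systemctl enable postgresql
# systemctl start postgresql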

4. Creating the application user and managing installation/application folders.

It is always recommended to run an application as its own dedicated user rather than as root. I created a user to run this application, created the application data and installation folders prior to the installation, and changed ownership of the folders to the dedicated bamboo user.

[root@server1 kernels]# useradd --create-home -c "Bamboo role account" bamboo
[root@server1 bamboo]# mkdir -p /opt/atlassian/bamboo
[root@server1 bamboo]# chown bamboo: /opt/atlassian/bamboo
[root@server1 bamboo]# ls -ld /opt/atlassian/bamboo
drwxr-xr-x 2 bamboo bamboo 4096 Apr 26 05:26 /opt/atlassian/bamboo

Now you can switch to the bamboo user, download the Bamboo installation package from their website, and extract it into the installation folder.

[root@server1 bamboo]# su - bamboo
[bamboo@server1 ~]$ cd /opt/atlassian/bamboo
[bamboo@server1 bamboo]$

[bamboo@server1 tmp]$ wget https://www.atlassian.com/software/bamboo/downloads/binary/atlassian-bamboo-5.10.3.tar.gz
--2016-04-26 05:28:54-- https://www.atlassian.com/software/bamboo/downloads/binary/atlassian-bamboo-5.10.3.tar.gz
Resolving www.atlassian.com (www.atlassian.com)... 52.87.106.229, 54.86.154.79
Connecting to www.atlassian.com (www.atlassian.com)|52.87.106.229|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://my.atlassian.com/software/bamboo/downloads/binary/atlassian-bamboo-5.10.3.tar.gz [following]
--2016-04-26 05:28:55-- https://my.atlassian.com/software/bamboo/downloads/binary/atlassian-bamboo-5.10.3.tar.gz
Resolving my.atlassian.com (my.atlassian.com)... 131.103.28.9
Connecting to my.atlassian.com (my.atlassian.com)|131.103.28.9|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://downloads.atlassian.com/software/bamboo/downloads/atlassian-bamboo-5.10.3.tar.gz [following]
--2016-04-26 05:28:55-- https://downloads.atlassian.com/software/bamboo/downloads/atlassian-bamboo-5.10.3.tar.gz
Resolving downloads.atlassian.com (downloads.atlassian.com)... 72.21.81.96
Connecting to downloads.atlassian.com (downloads.atlassian.com)|72.21.81.96|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 214301412 (204M) [application/x-gzip]
Saving to: ‘atlassian-bamboo-5.10.3.tar.gz’

100%[=====================================================================================================>] 214,301,412 62.1MB/s in 3.4s

2016-04-26 05:28:58 (61.0 MB/s) - ‘atlassian-bamboo-5.10.3.tar.gz’ saved [214301412/214301412]

[bamboo@server1 tmp]$ cd -
/opt/atlassian/bamboo
[bamboo@server1 bamboo]$
[bamboo@server1 bamboo]$ tar -xvf /tmp/atlassian-bamboo-5.10.3.tar.gz

Create a symlink named current to the directory for ease of managing the files.

[bamboo@server1 bamboo]$ ln -s atlassian-bamboo-5.10.3 current
[bamboo@server1 bamboo]$ ll
total 4
drwxr-xr-x 13 bamboo bamboo 4096 Mar 14 14:47 atlassian-bamboo-5.10.3
lrwxrwxrwx 1 bamboo bamboo 23 Apr 26 05:30 current -> atlassian-bamboo-5.10.3

Now create and modify the application-data folder location in the Bamboo configuration files.

[root@server1 bamboo]# mkdir -p /var/atlassian/application/bamboo
[root@server1 var]# chown bamboo: /var/atlassian/application/bamboo/
[bamboo@server1 bamboo]$ cat current/atlassian-bamboo/WEB-INF/classes/bamboo-init.properties
## You can specify your bamboo.home property here or in your system environment variables.

#bamboo.home=C:/bamboo/bamboo-home
bamboo.home=/var/atlassian/application/bamboo

It is recommended to keep separate folder locations for the installation and the data storage of this application.

5. Start Bamboo

Now switch to the bamboo user and move to your installation folder. Run the startup script from the installation folder.

[bamboo@server1 current]$ pwd
/opt/atlassian/bamboo/current

[bamboo@server1 current]$ bin/start-bamboo.sh

To run Bamboo in the foreground, start the server with start-bamboo.sh -fg

Server startup logs are located in /home/bamboo/current/logs/catalina.out

Bamboo Server Edition
Version : 5.10.3

If you encounter issues starting or stopping Bamboo Server, please see the Troubleshooting guide at https://confluence.atlassian.com/display/BAMBOO/Installing+and+upgrading+Bamboo

Using CATALINA_BASE: /home/bamboo/current
Using CATALINA_HOME: /home/bamboo/current
Using CATALINA_TMPDIR: /home/bamboo/current/temp
Using JRE_HOME: /
Using CLASSPATH: /home/bamboo/current/bin/bootstrap.jar:/home/bamboo/current/bin/tomcat-juli.jar
Tomcat started.

[bamboo@server1 current]$ tail -f /home/bamboo/current/logs/catalina.out
2016-04-26 07:42:38,834 INFO [localhost-startStop-1] [lifecycle] * Bamboo is starting up *
2016-04-26 07:42:38,834 INFO [localhost-startStop-1] [lifecycle] *******************************
2016-04-26 07:42:38,835 INFO [localhost-startStop-1] [ServletContextHolder] Setting servlet context: Bamboo
2016-04-26 07:42:38,877 INFO [localhost-startStop-1] [lifecycle] atlassian.org.osgi.framework.bootdelegation set to javax.servlet,javax.servlet.*,sun.*,com.sun.*,org.w3c.dom.*,org.apache.xerces.*
2016-04-26 07:42:40,737 INFO [localhost-startStop-1] [lifecycle] Starting Bamboo 5.10.3 (build #51020 Mon Mar 14 14:26:34 UTC 2016) using Java 1.8.0_91 from Oracle Corporation
2016-04-26 07:42:40,737 INFO [localhost-startStop-1] [lifecycle] Real path of servlet context: /home/bamboo/atlassian-bamboo-5.10.3/atlassian-bamboo/
2016-04-26 07:42:40,822 INFO [localhost-startStop-1] [DefaultSetupPersister] Current setup step: setupLicense
2016-04-26 07:42:40,828 INFO [localhost-startStop-1] [lifecycle] Bamboo home directory: /var/atlassian/application/bamboo
2016-04-26 07:42:40,828 INFO [localhost-startStop-1] [lifecycle] Default charset: UTF-8
2016-04-26 07:42:40,841 INFO [localhost-startStop-1] [UpgradeLauncher] Upgrades not performed since the application has not been set up yet.

2016-04-26 07:43:21,900 INFO [localhost-startStop-1] [SessionIdGeneratorBase] Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [41,050] milliseconds

You can verify the process status.

[root@server1 bamboo]# ps aux | grep bamboo
bamboo 21018 88.5 42.7 2705504 432068 ? Sl 05:54 0:20 //bin/java -Djava.util.logging.config.file=/opt/atlassian/bamboo/current/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xms256m -Xmx384m -Djava.endorsed.dirs=/opt/atlassian/bamboo/current/endorsed -classpath /opt/atlassian/bamboo/current/bin/bootstrap.jar:/opt/atlassian/bamboo/current/bin/tomcat-juli.jar -Dcatalina.base=/opt/atlassian/bamboo/current -Dcatalina.home=/opt/atlassian/bamboo/current -Djava.io.tmpdir=/opt/atlassian/bamboo/current/temp org.apache.catalina.startup.Bootstrap start
root 21041 0.0 0.2 112656 2380 pts/0 S+ 05:54 0:00 grep --color=auto bamboo

You can also create an Init script to manage this application.

6. Creating an Init Script

You can create an init script file /etc/init.d/bamboo with the script below and make it executable.

[root@server1 bamboo]# cat /etc/init.d/bamboo
#!/bin/sh
set -e
### BEGIN INIT INFO
# Provides: bamboo
# Required-Start: $local_fs $remote_fs $network $time
# Required-Stop: $local_fs $remote_fs $network $time
# Should-Start: $syslog
# Should-Stop: $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Atlassian Bamboo Server
### END INIT INFO
# INIT Script
######################################

# Define some variables
# Name of app ( bamboo, Confluence, etc )
APP=bamboo
# Name of the user to run as
USER=bamboo
# Location of application's bin directory
BASE=/opt/atlassian/bamboo/current
# Bamboo home (data) directory, matching bamboo-init.properties;
# without this, the ${BAMBOO_HOME} export in the start case would be empty
BAMBOO_HOME=/var/atlassian/application/bamboo

case "$1" in
# Start command
start)
echo "Starting $APP"
/bin/su - $USER -c "export BAMBOO_HOME=${BAMBOO_HOME}; $BASE/bin/startup.sh &> /dev/null"
;;
# Stop command
stop)
echo "Stopping $APP"
/bin/su - $USER -c "$BASE/bin/shutdown.sh &> /dev/null"
echo "$APP stopped successfully"
;;
# Restart command
restart)
$0 stop
sleep 5
$0 start
;;
*)
echo "Usage: /etc/init.d/$APP {start|restart|stop}"
exit 1
;;
esac

exit 0

[root@server1 bamboo]# chmod +x /etc/init.d/bamboo
[root@server1 bamboo]# /sbin/chkconfig --add bamboo
[root@server1 bamboo]#
[root@server1 bamboo]# chkconfig --list bamboo

bamboo 0:off 1:off 2:on 3:on 4:on 5:on 6:off

After starting the application, you can access your Bamboo instance by going to your web browser and entering the address http://45.33.76.60:8085/
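If firewalld is running on this server (as in the earlier articles; an assumption here), open Bamboo's default port first:

# firewall-cmd --permanent --zone=public --add-port=8085/tcp
# firewall-cmd --reload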

7. Configure Bamboo

You need to acquire a valid license for the Bamboo installation from their store here. I took my Bamboo evaluation license and started with the installation.

acquirelicense

We need to provide this license information to proceed with the installation. Once the license is provided, you can choose any setup method according to your preferences. I chose the Express installation method.

expressfinalinstall

8. Setup Administrator User

Now you can create an administrator user to manage the application, which is the final installation step.

expressinstallation

This user will have the global administrative privileges for the entire Bamboo installation and should not be deleted.

Once you've entered these details, click Finish. The Bamboo dashboard will then be ready to use.

Congratulations, you have successfully set up Bamboo!

Build Dashboard - Atlassian Bamboo 2016-04-26 13-22-20


Now we've completed the installation and setup, and you can start your work with this application! I hope you enjoyed reading this article and found it useful and informative. Thank you for reading; I appreciate your valuable comments and suggestions.

The post How to Install Bamboo on CentOS 7 appeared first on LinOxide.

How to Install Splunk on CentOS 7

$
0
0

Splunk is one of the most powerful tools for exploring and searching data. It is one of the easiest, fastest and most secure ways to search, analyze, collect and visualize massive data streams in real time from applications, web servers, databases, server platforms, cloud networks and many more. The Splunk developers offer Splunk software packages compatible with different platforms, so we can choose the one which best suits our purpose. This software makes it simple to collect, analyze and act upon the untapped value of the massive data generated by any IT enterprise, security system or business application, giving you total insight to obtain the best operational performance and business results.

There are no official prerequisites for the installation, but I recommend a proper hostname, firewall and network configuration for the server prior to installation. This software supports only 64-bit server architectures. In this article, I'm guiding you through installing the Splunk Enterprise version on a CentOS 7 server. Let's walk through the installation steps one by one.

1. Create a Splunk User

It is always recommended to run this application as its own dedicated user rather than as root. I created a user to run this application and created an application folder for the installation.

[root@server1 tmp]# groupadd splunk
[root@server1 tmp]# useradd -d /opt/splunk -m -g splunk splunk
[root@server1 tmp]# su - splunk
[splunk@server1 ~]$ id
uid=1001(splunk) gid=1001(splunk) groups=1001(splunk)

Confirm the server architecture

[splunk@server1 ~]$ getconf LONG_BIT
64

2. Download and extract the Splunk Enterprise version

Create a Splunk account and download the Splunk software from their official website here.

Splunk

Now extract the tar file and copy the files to the Splunk application folder, namely the /opt/splunk created earlier.

[root@server1 tmp]# tar -xvf splunk-6.4.0-f2c836328108-Linux-x86_64.tgz
[root@server1 tmp]# cp -rp splunk/* /opt/splunk/
[root@server1 tmp]# chown -R splunk: /opt/splunk/

3. Splunk Installation

Once the Splunk software is downloaded, you can log in as your Splunk user and run the startup script. I chose the trial license, so it is taken by default.

[root@server1 tmp]# su - splunk
Last login: Fri Apr 29 08:14:12 UTC 2016 on pts/0

[splunk@server1 ~]$ cd bin/
[splunk@server1 bin]$ ./splunk start --accept-license

This appears to be your first time running this version of Splunk.

Copying '/opt/splunk/etc/openldap/ldap.conf.default' to '/opt/splunk/etc/openldap/ldap.conf'.
Generating RSA private key, 1024 bit long modulus
.++++++
..................++++++
e is 65537 (0x10001)
writing RSA key

Generating RSA private key, 1024 bit long modulus
................++++++
..++++++
e is 65537 (0x10001)
writing RSA key

Moving '/opt/splunk/share/splunk/search_mrsparkle/modules.new' to '/opt/splunk/share/splunk/search_mrsparkle/modules'.

Splunk> Australian for grep.

Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Creating: /opt/splunk/var/lib/splunk
Creating: /opt/splunk/var/run/splunk
Creating: /opt/splunk/var/run/splunk/appserver/i18n
Creating: /opt/splunk/var/run/splunk/appserver/modules/static/css
Creating: /opt/splunk/var/run/splunk/upload
Creating: /opt/splunk/var/spool/splunk
Creating: /opt/splunk/var/spool/dirmoncache
Creating: /opt/splunk/var/lib/splunk/authDb
Creating: /opt/splunk/var/lib/splunk/hashDb
Checking critical directories... Done
Checking indexes...
Validated: _audit _internal _introspection _thefishbucket history main summary
Done
New certs have been generated in '/opt/splunk/etc/auth'.
Checking filesystem compatibility... Done
Checking conf files for problems...
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunk/splunk-6.4.0-f2c836328108-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
Generating a 1024 bit RSA private key
.....................++++++
...........................++++++
writing new private key to 'privKeySecure.pem'
-----
Signature ok
subject=/CN=server1.centos7-test.com/O=SplunkUser
Getting CA Private Key
writing RSA key
Done
[ OK ]

Waiting for web server at http://127.0.0.1:8000 to be available.... Done
If you get stuck, we're here to help.
Look for answers here: http://docs.splunk.com

The Splunk web interface is at http://server1.centos7-test.com:8000

Now you can access your Splunk web interface at http://IP:8000/ or http://hostname:8000. You need to make sure port 8000 is open in your server firewall.
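With firewalld (assuming that is the firewall in use, as elsewhere in this series), that would be:

# firewall-cmd --permanent --zone=public --add-port=8000/tcp
# firewall-cmd --reload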

4. Configuring Splunk Web Interface

I've completed my installation and have the Splunk service up and running on my server. Now I need to set up the Splunk web interface. I accessed my Splunk web interface and set my administrator password.

splunks1

The first time you access the Splunk interface, you can use the user/password provided on the page, which is admin/changeme in this case. Once logged in, the very next page will ask you to change and confirm your new password.

splunk2

Now, you've set your admin password. Once you log in with the new password, you will have your Splunk Dashboard ready to use.

splunkhome

There are different categories listed on the home page. You can choose the required one and start splunking.

5. Adding a task

Let me add an example of a simple task to the Splunk system. Just follow my snapshots to understand how I added it. The task is to add the /var/log folder to the Splunk system for monitoring.

  1. Open up the Splunk Web interface. Click on the Settings Tab >> Choose the Add Data option

add data

2. The Add Data tab opens up with three options: Upload, Monitor and Forward. Here our task is to monitor a folder, so we go ahead with Monitor.

monitor

In the Monitor option, there are four categories, as below:

Files & Directories: to monitor files/folders

HTTP Event Collector: to monitor data streams over HTTP

TCP/UDP: to monitor service ports

Scripts: to monitor scripts

3. As per our purpose, I chose the Files & Directories option.

files-folders

4. Now I'm choosing the exact folder path on the server to monitor. Once you confirm the settings, you can click Next and Review.

var-log

var-log2

var-log3


5. Now you can start searching and monitoring the log file as required.

var-log4

donemonitor
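Incidentally, the same monitor can be added from the command line instead of the web interface (an alternative path; this assumes you run it as the splunk user from /opt/splunk/bin):

$ ./splunk add monitor /var/log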

Here you can see the logs narrowed down to one of my Redis applications on the server.

redis_splunk

This is just a simple example of Splunking; you can add as many tasks as you like and explore your server data. I hope this article is informative and useful for you. Thank you for reading :) Your valuable suggestions and comments are welcome. Now just try Splunk!

Enjoy Splunking :)

The post How to Install Splunk on CentOS 7 appeared first on LinOxide.


How to Install OpenDCIM on CentOS 7

$
0
0

Data center infrastructure management (DCIM) is a growing challenge for data center managers, and a hot market for software vendors. The openDCIM project offers an open source alternative for companies seeking to improve their asset tracking and capacity planning. OpenDCIM is used for managing the infrastructure of a data center, no matter how small or large. DCIM means many different things to many different people, and there is a multitude of commercial applications available, but openDCIM does not claim to be a function-by-function replacement for commercial applications. It was initially developed in-house at Vanderbilt University Information Technology Services by Scott Milliken. The software is released under the GPL v3 license: you are free to modify it and share it with others, as long as you acknowledge where it came from.

The main goal of openDCIM is to eliminate the excuse for anybody to ever track their data center inventory in a spreadsheet or word processing document again. It provides a complete physical inventory of the data center.

Features of OpenDCIM:

Following are some of the useful features of openDCIM:

  • Support for Multiple Rooms (Data Centers)
  • Computation of Center of Gravity for each cabinet
  • Template management for devices, with ability to override per device
  • Optional tracking of cable connections within each cabinet, and for each switch device
  • Archival functions for equipment sent to salvage/disposal
  • Management of the three key elements of capacity management - space, power, and cooling
  • Basic contact management and integration into existing business directory via UserID
  • Fault Tolerance Tracking - run a power outage simulation to see what would be affected as each source goes down

Prerequisites:

In order to install OpenDCIM on CentOS 7, we need to meet the following requirements on our server.

  • Web host running Apache 2.x (or higher) with an SSL Enabled site
  • MySQL 5.x (or higher) database
  • PHP 5.x (or higher)
  • User Authentication
  • Web Based Client

Installing Apache, PHP, MySQL

Our first step is to make sure that the complete LAMP stack has been properly configured with Apache/PHP and MySQL/MariaDB running.

To do so, let's run the following command on your CentOS 7 server to install Apache, PHP with a few of its required modules, and the MySQL-MariaDB server.

# yum install httpd php php-mysql php-mbstring mariadb-server

After resolving the dependencies, the packages shown below will be installed on your system once you type 'y' and hit the Enter key.

LAMP packages

Start and Enable Apache/MySQL services:

Once the packages are installed, start and enable the Apache and MySQL services using the following commands, then check that their status is active and running.

# systemctl enable httpd.service
# systemctl start httpd.service

# systemctl enable mariadb.service
# systemctl start mariadb.service

starting services

Create database for openDCIM

Before creating the database for OpenDCIM, secure your MySQL/MariaDB server by completing the following tasks after running the command shown below.

# mysql_secure_installation

  • Set a root password
  • Remove anonymous users
  • Disallow root login remotely
  • Remove test database and access to it
  • Reload privilege tables

securing MysQL

Now create a database for openDCIM after connecting to MariaDB.

# mysql -u root -p

MariaDB [(none)]> create database dcim;
MariaDB [(none)]> grant all privileges on dcim.* to 'dcim' identified by 'password';
MariaDB [(none)]> exit

OpenDCIM database
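If you want to verify the grant before moving on, MariaDB can list the privileges of the new user (a quick optional check):

MariaDB [(none)]> show grants for 'dcim';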

Enable HTTPS

Run the command below to install the 'mod_ssl' package on your CentOS 7 server.

# yum -y install mod_ssl

Once the package is installed, generate the necessary keys and copy them to the proper directories using the commands below.

# cd /root

# openssl genrsa -out ca.key 1024

# openssl req -new -key ca.key -out ca.csr

# openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt

# cp ca.crt /etc/pki/tls/certs

# cp ca.key /etc/pki/tls/private/ca.key

# cp ca.csr /etc/pki/tls/private/ca.csr

Enabling https
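As an aside, the same self-signed key and certificate can also be generated in a single step; a sketch using a 2048-bit key that could replace the first three openssl commands above:

# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ca.key -out ca.crt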

Setup Server Name:

To set the server name, open the default web configuration file in your editor, search for 'ServerName' in it, and add the following line.

# vim +/ServerName /etc/httpd/conf/httpd.conf

ServerName opendcim_server_name:443

Save and close the configuration file using ':wq!' and then restart the Apache web service.

# systemctl restart httpd.service

New Virtual Host for OpenDCIM:

Create a new configuration file for the openDCIM VirtualHost and put the following configuration in it.

# vim /etc/httpd/conf.d/opendcim_server_name.conf

<VirtualHost *:443>
SSLEngine On
SSLCertificateFile /etc/pki/tls/certs/ca.crt
SSLCertificateKeyFile /etc/pki/tls/private/ca.key
ServerAdmin you@example.net
DocumentRoot /opt/openDCIM/opendcim
ServerName opendcim.example.net
<Directory /opt/openDCIM/opendcim>
AllowOverride All
AuthType Basic
AuthName "openDCIM"
AuthUserFile /opt/openDCIM/opendcim/.htpasswd
Require valid-user
</Directory>
</VirtualHost>
OpenDCIM VH

After saving and closing the file, we now need to enable basic user authentication to protect the openDCIM web directory by creating the files we mentioned in the above configuration.

Let's run the commands below to create the password file and then add a user to it as shown.

# mkdir -p /opt/openDCIM/opendcim

# touch /opt/openDCIM/opendcim/.htpasswd

# htpasswd /opt/openDCIM/opendcim/.htpasswd Administrator

Let's open web access on the firewall, as FirewallD is enabled by default on CentOS 7 and blocks access to the HTTPS port 443.

# firewall-cmd --zone=public --add-port=443/tcp --permanent
success

# firewall-cmd --reload
success

opendcim user setup
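To confirm that the port is now open, you can list the ports of the public zone, which should report 443/tcp:

# firewall-cmd --zone=public --list-ports
443/tcp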

Download and Install openDCIM

After finishing the configuration of the server, we now need to download the openDCIM package from its official web page.

OpenDCIM Download

Use below commands to get the package on your server.

# cd /opt/openDCIM/

# curl -O http://opendcim.org/packages/openDCIM-4.2.tar.gz

Extract the archive and create a symbolic link using the commands below.

# tar zxf openDCIM-4.2.tar.gz

# ln -s openDCIM-4.2-release opendcim

You can also rename the directory openDCIM-4.2-release to 'opendcim' in case you don't want to create the symbolic link.

Configuring OpenDCIM:

Now, prepare the configuration file for access to the database we have created earlier.

# cd /opt/openDCIM/opendcim

# cp db.inc.php-dist db.inc.php

# vim db.inc.php

$dbhost = 'localhost';
$dbname = 'dcim';
$dbuser = 'dcim';
$dbpass = 'dcimpassword';

# systemctl restart httpd.service

Access OpenDCIM Web Portal:

Now open openDCIM in your browser to proceed with the web based installation.

https://your_server_name_or_IP/

You will be asked for authentication, and after providing the username and password you will be directed to the OpenDCIM web page, where you will be asked to create a new Department as shown.

OpenDCIM Web

After completing these parameters, switch to the Data Center and give your new Data Center details.

OpenDCIM Data center

Once you have created a Data Center, you will be able to create its cabinet inventory.

OpenDCIM Cabinet

You are done, and have finished the basic configuration of OpenDCIM.

OpenDCIM installed

Conclusion:

Thank you for being with us; we have successfully set up OpenDCIM on our CentOS 7 server. Now you can easily manage your data centers, no matter how small or large your environment is. Do share your experience and leave your valuable comments.

The post How to Install OpenDCIM on CentOS 7 appeared first on LinOxide.

Getting Started with Ansible on Command Line

$
0
0

ANSIBLE is an open source software platform for configuration management, provisioning, application deployment and service orchestration. It can be used for configuring our servers in production, staging and development environments. It can also be used to manage application servers like web servers, database servers and many others. Other similar configuration management systems are Chef, Puppet, Salt and Distelli; compared to all of these, Ansible is the simplest and most easily managed tool. The main advantages of using Ansible are as follows:

1. Modules can be written in any programming language.
2. No agent running on the client machine.
3. Easy to install and Manage.
4. Highly reliable.
5. Scalable.

In this article, I'll explain some of the basics about your first steps with Ansible.

Understanding the hosts file

Once you've installed Ansible, the first thing to understand is its inventory file. This file contains the list of target servers which are managed by Ansible. The default hosts file location is /etc/ansible/hosts. We can edit this file to include our target systems. This file specifies several groups in which you can classify your hosts as you prefer.

ansible_hosts

As mentioned here, important things to note in creating the hosts file:

# - Comments begin with the '#' character
# - Blank lines are ignored
# - Groups of hosts are delimited by [header] elements
# - You can enter hostnames or IP addresses
# - A hostname/ip can be a member of multiple groups
# - Remote hosts can have assignments in more than one group
# - Include host ranges in one string as server-[01:12]-example.com

PS: It's not recommended to make modifications to the default inventory file; instead, we can create our own custom inventory files at any location as per our convenience.

How Ansible works?

First of all, the Ansible admin client connects to the target server using SSH. We don't need to set up any agents on the client servers. All we need is Python and a user that can log in and execute the scripts. Once the connection is established, Ansible starts gathering facts about the client machine, such as the operating system, running services and packages. We can execute different commands, copy/modify/delete files & folders, and manage or configure packages and services easily using Ansible. I'll demonstrate it with the help of my demo setup.

My client servers are 45.33.76.60 and 139.162.35.39. I created my custom inventory hosts file under my user's home directory. Please see my inventory file with three groups, namely webservers, production and Testing.

In webservers, I've included both of my client servers, and then separated them into two other groups, with one in production and the other in Testing.

linuxmonty@linuxmonty-Latitude-E4310:~$ cat hosts
[webservers]
139.162.35.39
45.33.76.60

[production]
139.162.35.39

[Testing]
45.33.76.60

Establishing SSH connections

We need to create SSH keys on the admin server and copy them over to the target servers to establish the SSH connections. Let's take a look at how I did that for my client servers.

linuxmonty@linuxmonty-Latitude-E4310:~$ ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
2e:2f:32:9a:73:6d:ba:f2:09:ac:23:98:0c:fc:6c:a0 linuxmonty@linuxmonty-Latitude-E4310
The key's randomart image is:
+--[ RSA 4096]----+
| |
| |
| |
| |
|. S |
|.+ . |
|=.* .. . |
|Eoo*+.+o |
|o.+*=* .. |
+-----------------+

Copying the SSH keys

This is how we copy the SSH keys from Admin server to the target servers.

Client 1:

linuxmonty@linuxmonty-Latitude-E4310# ssh-copy-id root@139.162.35.39
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@139.162.35.39's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@139.162.35.39'"
and check to make sure that only the key(s) you wanted were added.

linuxmonty@linuxmonty-Latitude-E4310#

Client 2:

linuxmonty@linuxmonty-Latitude-E4310# ssh-copy-id root@45.33.76.60
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@45.33.76.60's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@45.33.76.60'"
and check to make sure that only the key(s) you wanted were added.

linuxmonty@linuxmonty-Latitude-E4310#

Once you execute these commands from your admin server, your public key will be added to the target servers and saved in their authorized_keys files.

Familiarizing some basic Ansible Modules

Modules control system resources, configuration, packages, files etc. There are about 450+ modules in Ansible. First of all, let's use a module to check the connectivity between the admin server and the target servers. We can run the ping module from the admin server to confirm the connectivity.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m ping -u root
139.162.35.39 | success >> {
"changed": false,
"ping": "pong"
}

45.33.76.60 | success >> {
"changed": false,
"ping": "pong"
}

-i : Represents the inventory file selection
-m : Represents the module name selection
-u : Represents the user for execution

Here we're running this command as a regular user on the admin server, so we pass -u root to have the module executed as the root user on the target servers.

This is how to check the inventory status in the hosts file.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts webservers --list-hosts
139.162.35.39
45.33.76.60
linuxmonty@linuxmonty-Latitude-E4310:~$
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production --list-hosts
139.162.35.39
linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts Testing --list-hosts
45.33.76.60

Setup Module:

Now run the setup module to gather more facts about your target servers and organize your playbooks accordingly. This module provides information about the server's hardware, network and some Ansible-related software settings. These facts represent discovered variables about your system, can be referenced in playbooks, and can also be used to implement conditional execution of tasks.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m setup -u root

setup

You can view the server architecture, CPU information, python version, memory, OS version etc by running this module.

Command Module:

Here are some examples of command module usage. We can pass any command as an argument to the command module to execute it.

uptime:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m command -a 'uptime' -u root
139.162.35.39 | success | rc=0 >>
14:55:31 up 4 days, 23:56, 1 user, load average: 0.00, 0.01, 0.05

45.33.76.60 | success | rc=0 >>
14:55:41 up 15 days, 3:20, 1 user, load average: 0.20, 0.07, 0.06

hostname:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m command -a 'hostname' -u root
139.162.35.39 | success | rc=0 >>
client2.production.com

45.33.76.60 | success | rc=0 >>
client1.testing.com

w:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m command -a 'w' -u root
139.162.35.39 | success | rc=0 >>
08:07:55 up 4 days, 17:08, 2 users, load average: 0.00, 0.01, 0.05
USER TTY LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 07:54 7:54 0.00s 0.00s -bash
root pts/1 08:07 0.00s 0.05s 0.00s w

45.33.76.60 | success | rc=0 >>
08:07:58 up 14 days, 20:33, 2 users, load average: 0.03, 0.03, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 101.63.79.157 07:54 8:01 0.00s 0.00s -bash
root pts/1 101.63.79.157 08:07 0.00s 0.05s 0.00s w

Similarly, we can execute any Linux command across multiple target servers using the command module in Ansible.
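Note that the command module does not process shell operators such as pipes or redirects; for those, the shell module can be used instead. A small sketch (the grep pattern is just an example):

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m shell -a 'grep -i error /var/log/messages | tail -5' -u root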

Managing User and Groups

Ansible provides a module called "user" which server this purpose. The ‘user’ module allows easy creation and manipulation of existing user accounts, as well as removal of the existing user accounts as per our needs.

Usage : # ansible -i inventory selection -m user -a "name=username1 password=<crypted password here>"

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m user -a "name=adminsupport password=<default123>" -u root
45.33.76.60 | success >> {
"changed": true,
"comment": "",
"createhome": true,
"group": 1004,
"home": "/home/adminsupport",
"name": "adminsupport",
"password": "NOT_LOGGING_PASSWORD",
"shell": "/bin/bash",
"state": "present",
"system": false,
"uid": 1004
}

On the above server, this command creates the adminsupport user. But on the server 139.162.35.39 this user is already present; hence useradd just warns that the home directory already exists and skips copying the skeleton files, as shown below.

139.162.35.39 | success >> {
"changed": true,
"comment": "",
"createhome": true,
"group": 1001,
"home": "/home/adminsupport",
"name": "adminsupport",
"password": "NOT_LOGGING_PASSWORD",
"shell": "/bin/bash",
"state": "present",
"stderr": "useradd: warning: the home directory already exists.\nNot copying any file from skel directory into it.\nCreating mailbox file: File exists\n",
"system": false,
"uid": 1001
}

Usage : ansible -i inventory selection -m user -a 'name=username state=absent'

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts Testing -m user -a "name=adminsupport state=absent" -u root
45.33.76.60 | success >> {
"changed": true,
"force": false,
"name": "adminsupport",
"remove": false,
"state": "absent"
}

This command deletes the user adminsupport from our Testing server 45.33.76.60.
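Note that state=absent alone leaves the user's home directory behind; the user module also accepts remove=yes (the equivalent of userdel -r) if you want the home directory deleted too. A sketch of that variant:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts Testing -m user -a "name=adminsupport state=absent remove=yes" -u root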

File Transfers

Ansible provides a module called "copy" to enhance the file transfers across multiple servers. It can securely transfer a lot of files to multiple servers in parallel.


Usage : ansible -i inventory selection -m copy -a "src=file_name dest=file path to save"

I'm copying a shell script called test.sh from my admin server to /root/ on all my target servers. Please see the command usage below:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m copy -a "src=test.sh dest=/root/" -u root
139.162.35.39 | success >> {
"changed": true,
"dest": "/root/test.sh",
"gid": 0,
"group": "root",
"md5sum": "d910e95fdd8efd48d7428daafa7706ec",
"mode": "0755",
"owner": "root",
"size": 107,
"src": "/root/.ansible/tmp/ansible-tmp-1463040011.67-93143679295729/source",
"state": "file",
"uid": 0
}

45.33.76.60 | success >> {
"changed": true,
"dest": "/root/test.sh",
"gid": 0,
"group": "root",
"md5sum": "d910e95fdd8efd48d7428daafa7706ec",
"mode": "0755",
"owner": "root",
"size": 107,
"src": "/root/.ansible/tmp/ansible-tmp-1463040013.85-235107847216893/source",
"state": "file",
"uid": 0
}

Output Result:

[root@client2 ~]# ll /root/test.sh
-rwxr-xr-x 1 root root 107 May 12 08:00 /root/test.sh

If you use a playbook, you can take advantage of the template module to perform the same task.
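For the opposite direction, pulling files from the target servers back to the admin server, Ansible provides the fetch module; a minimal sketch (the paths are illustrative):

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m fetch -a "src=/root/test.sh dest=/tmp/fetched" -u root

Each host's copy lands under a hostname-specific subdirectory of dest, so files from different servers don't overwrite each other.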

It also provides a module called "file" which will help us to change the ownership and permissions of the files across multiple servers. We can pass these options directly to the "copy" command. This module can also be used to create or delete the files/folders.

Example :

I've modified the owner and group of the existing file test.sh on the destination server and changed its permissions to 600.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m file -a "dest=/root/test.sh mode=600 owner=adminsupport group=adminsupport" -u root
139.162.35.39 | success >> {
"changed": true,
"gid": 1001,
"group": "adminsupport",
"mode": "0600",
"owner": "adminsupport",
"path": "/root/test.sh",
"size": 107,
"state": "file",
"uid": 1001
}

Output :

[root@client2 ~]# ll | grep test.sh
-rw------- 1 adminsupport adminsupport 107 May 12 08:00 test.sh

Creating A folder

Now, I need to create a folder with the desired ownership and permissions. Let's see the command to achieve that. I'm creating a folder "ansible" on my production server group, assigning it to the owner "adminsupport" with 755 permissions.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m file -a "dest=/root/ansible mode=755 owner=adminsupport group=adminsupport state=directory" -u root
139.162.35.39 | success >> {
"changed": true,
"gid": 1001,
"group": "adminsupport",
"mode": "0755",
"owner": "adminsupport",
"path": "/root/ansible",
"size": 4096,
"state": "directory",
"uid": 1001
}

Output :

[root@client2 ~]# ll | grep ansible
drwxr-xr-x 2 adminsupport adminsupport 4096 May 12 08:45 ansible
[root@client2 ~]# pwd
/root

Deleting a folder

We can even use this module to delete folders/files on multiple target servers. Please see how I did that.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m file -a "dest=/root/ansible state=absent" -u root
139.162.35.39 | success >> {
"changed": true,
"path": "/root/ansible",
"state": "absent"
}

The only variable that determines the operation is "state"; it is set to absent to delete that particular folder from the server.

Managing Packages

Let's see how we can manage packages using Ansible. We need to identify the platform of the target servers and use the package manager module that suits it, such as yum or apt, according to the target server's OS. Ansible also has modules for managing packages on many other platforms.

Installing a VsFTPD package on my production server in my inventory.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=present' -u root
139.162.35.39 | success >> {
"changed": true,
"msg": "",
"rc": 0,
"results": [
"Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: mirrors.linode.com\n * epel: mirror.wanxp.id\n * extras: mirrors.linode.com\n * updates: mirrors.linode.com\nResolving Dependencies\n--> Running transaction check\n---> Package vsftpd.x86_64 0:3.0.2-11.el7_2 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n vsftpd x86_64 3.0.2-11.el7_2 updates 167 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 167 k\nInstalled size: 347 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : vsftpd-3.0.2-11.el7_2.x86_64 1/1 \n Verifying : vsftpd-3.0.2-11.el7_2.x86_64 1/1 \n\nInstalled:\n vsftpd.x86_64 0:3.0.2-11.el7_2 \n\nComplete!\n"
]
}

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=present' -u root
139.162.35.39 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"vsftpd-3.0.2-11.el7_2.x86_64 providing vsftpd is already installed"
]
}

If you notice, you can see that when I executed the ansible command to install the package the first time, the "changed" variable was "true", which means the command installed the package. But when I ran the command again, it reported "changed" as "false", which means the command checked for the package and found it already installed, so nothing was done on that server.

Similarly, we can update or delete a package; the only variable which determines that is the state variable, which can be set to latest to install the latest available package and to absent to remove the package from the server.

Examples:

Updating the package:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=latest' -u root
139.162.35.39 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"All packages providing vsftpd are up to date"
]
}

This reports that the installed software is already the latest version and there are no available updates.

Removing the package:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=absent' -u root
139.162.35.39 | success >> {
"changed": true,
"msg": "",
"rc": 0,
"results": [
"Loaded plugins: fastestmirror\nResolving Dependencies\n--> Running transaction check\n---> Package vsftpd.x86_64 0:3.0.2-11.el7_2 will be erased\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nRemoving:\n vsftpd x86_64 3.0.2-11.el7_2 @updates 347 k\n\nTransaction Summary\n================================================================================\nRemove 1 Package\n\nInstalled size: 347 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Erasing : vsftpd-3.0.2-11.el7_2.x86_64 1/1 \n Verifying : vsftpd-3.0.2-11.el7_2.x86_64 1/1 \n\nRemoved:\n vsftpd.x86_64 0:3.0.2-11.el7_2 \n\nComplete!\n"
]
}

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts production -m yum -a 'name=vsftpd state=absent' -u root
139.162.35.39 | success >> {
"changed": false,
"msg": "",
"rc": 0,
"results": [
"vsftpd is not installed"
]
}

The first time we ran the ansible command it removed the VsFTPD package; then we ran it again to confirm that the package no longer exists on the server.

Managing Services

It is necessary to manage the services installed on the target servers. Ansible provides the service module for that. We can use this module to enable services on boot and to start/stop/restart them. Please see the examples for each case.

Starting/Enabling a Service:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m service -a 'name=nginx enabled=yes state=started' -u root
45.33.76.60 | success >> {
"changed": false,
"enabled": true,
"name": "nginx",
"state": "started"
}

139.162.35.39 | success >> {
"changed": false,
"enabled": true,
"name": "nginx",
"state": "started"
}

Stopping a Service:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m service -a 'name=nginx state=stopped' -u root
139.162.35.39 | success >> {
"changed": true,
"name": "nginx",
"state": "stopped"
}

45.33.76.60 | success >> {
"changed": true,
"name": "nginx",
"state": "stopped"
}

Restarting a Service:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible -i hosts all -m service -a 'name=nginx state=restarted' -u root
139.162.35.39 | success >> {
"changed": true,
"name": "nginx",
"state": "started"
}

45.33.76.60 | success >> {
"changed": true,
"name": "nginx",
"state": "started"
}

As you can see, the state variable is set to started, restarted or stopped to manage the service.

Playbooks

Playbooks are Ansible's configuration, deployment, and orchestration language. They can assign different roles, perform tasks like copying or deleting files/folders, make use of mature modules that provide most of the functionality, and substitute variables to make your deployments dynamic and re-usable.

Playbooks define your deployment steps and configuration. They are modular and can contain variables. They can be used to orchestrate steps across multiple machines. They are configuration files written in simple YAML, which is Ansible's automation language. They can contain multiple tasks and can make use of "mature" modules.

Here is an example of a simple Playbook.

linuxmonty@linuxmonty-Latitude-E4310:~$ cat simpleplbook.yaml
---

- hosts: production
  remote_user: root

  tasks:
  - name: Setup FTP
    yum: pkg=vsftpd state=installed
  - name: start FTP
    service: name=vsftpd state=started enabled=yes

This is a simple playbook with two tasks as below:

  1. Install FTP server
  2. Ensure the Service status

Let's look at each statement in detail.

- hosts: production - This selects the inventory group to run this play against.

remote_user: root - This specifies the user that will execute the tasks on the target servers.

tasks:
1. - name: Setup FTP
2. yum: pkg=vsftpd state=installed
3. - name: start FTP
4. service: name=vsftpd state=started enabled=yes

These specify the two tasks to be performed while running this playbook. We can divide them into four statements for more clarity. The first statement describes the task, which is setting up an FTP server, and the second statement performs it by installing the package on the target server. The third statement describes the next task, and the fourth ensures the service status by starting the FTP server and enabling it on boot.

Now let's see the output of this playbook.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-playbook -i hosts simpleplbook.yaml

PLAY [production] *************************************************************

GATHERING FACTS ***************************************************************
ok: [139.162.35.39]

TASK: [Setup FTP] *************************************************************
changed: [139.162.35.39]

TASK: [start FTP] *************************************************************
changed: [139.162.35.39]

PLAY RECAP ********************************************************************
139.162.35.39 : ok=3 changed=2 unreachable=0 failed=0

We can see that playbooks are executed sequentially according to the tasks specified in them. Ansible first selects the inventory and then performs the plays one by one.
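Before applying a playbook to real servers, it can also be run in check mode, which reports what would change without actually changing anything:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-playbook -i hosts simpleplbook.yaml --check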

Application Deployments

I'm going to set up my web servers using a playbook. I created a playbook for my "webservers" inventory group. Please see my playbook details below:

linuxmonty@linuxmonty-Latitude-E4310:~$ cat webservers_setup.yaml
---

- hosts: webservers
  vars:
  - Welcomemsg: "Welcome to Ansible Application Deployment"

  tasks:
  - name: Setup Nginx
    yum: pkg=nginx state=installed
  - name: Copying the index page
    template: src=index.html dest=/usr/share/nginx/html/index.html
  - name: Enable the service on boot
    service: name=nginx enabled=yes
  - name: start Nginx
    service: name=nginx state=started
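The playbook assumes an index.html file sitting next to it on the admin server. Since the template module renders the file through Jinja2, a minimal, hypothetical sketch of that file consuming the Welcomemsg variable could be:

linuxmonty@linuxmonty-Latitude-E4310:~$ cat index.html
<html>
<body>
<h1>{{ Welcomemsg }}</h1>
</body>
</html>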

Now let us run this playbook from my admin server to deploy it.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-playbook -i hosts -s webservers_setup.yaml -u root

PLAY [webservers] *************************************************************

GATHERING FACTS ***************************************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [Setup Nginx] ***********************************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

TASK: [Copying the index page] ************************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

TASK: [Enable the service on boot] ********************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

TASK: [start Nginx] ***********************************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

PLAY RECAP ********************************************************************
139.162.35.39 : ok=5 changed=4 unreachable=0 failed=0
45.33.76.60 : ok=5 changed=4 unreachable=0 failed=0

This playbook performs four tasks, as evident from the result. After running this playbook, we can confirm the status by checking the target servers in the browser.

ansiblewebserver

Now, I'm planning to add a PHP module, namely php-gd, to the target servers. I can edit my playbook to include that task too and run it again. Let's see what happens now. My modified playbook is below:

linuxmonty@linuxmonty-Latitude-E4310:~$ cat webservers_setup.yaml
---

- hosts: webservers
  vars:
  - Welcomemsg: "Welcome to Nginx default page"
  - WelcomePHP: "PHP GD module enabled"

  tasks:
  - name: Setup Nginx
    yum: pkg=nginx state=installed
  - name: Copying the index page
    template: src=index.html dest=/usr/share/nginx/html/index.html
  - name: Enable the service on boot
    service: name=nginx enabled=yes
  - name: start Nginx
    service: name=nginx state=started
  - name: Setup PHP-GD
    yum: pkg=php-gd state=installed

As you can see, I appended the new variable and the Setup PHP-GD task to my playbook. So this is how it goes now.

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-playbook -i hosts -s webservers_setup.yaml -u root

PLAY [webservers] *************************************************************

GATHERING FACTS ***************************************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [Setup Nginx] ***********************************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [Copying the index page] ************************************************
changed: [139.162.35.39]
changed: [45.33.76.60]

TASK: [Enable the service on boot] ********************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [start Nginx] ***********************************************************
ok: [139.162.35.39]
ok: [45.33.76.60]

TASK: [Setup PHP-GD] **********************************************************
changed: [45.33.76.60]
changed: [139.162.35.39]

PLAY RECAP ********************************************************************
139.162.35.39 : ok=6 changed=2 unreachable=0 failed=0
45.33.76.60 : ok=6 changed=2 unreachable=0 failed=0

On close analysis of this result, you can see that only two tasks reported modifications to the target servers: one is the index file modification and the other is the installation of our additional PHP module. Now we can see the changes on the target servers in the browser.

PHPmodule+Nginx

Roles

Ansible roles are a special kind of playbook that is fully self-contained and portable. A role contains the tasks, variables, configuration templates and other supporting files needed to complete complex orchestration. Roles can be used to simplify more complex operations. You can create different roles like common, webservers, db_servers etc., each categorized by purpose, and include them in the main playbook by just mentioning the roles. This is how we create a role:

linuxmonty@linuxmonty-Latitude-E4310:~$ ansible-galaxy init common
common was created successfully

Now, I've created a role named common to perform some of the common tasks on all my target servers. Each role contains its own tasks, configuration templates, variables, handlers etc.

total 40
drwxrwxr-x 9 linuxmonty linuxmonty 4096 May 13 14:06 ./
drwxr-xr-x 34 linuxmonty linuxmonty 4096 May 13 14:06 ../
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 defaults/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 files/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 handlers/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 meta/
-rw-rw-r-- 1 linuxmonty linuxmonty 1336 May 13 14:06 README.md
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 tasks/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 templates/
drwxrwxr-x 2 linuxmonty linuxmonty 4096 May 13 14:06 vars/

We can create our YAML files inside each of these folders as per our purpose. Later on, we can run all these tasks by just specifying the roles inside a playbook, as sketched below. You can get more details on Ansible roles here.
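A minimal sketch of such a playbook (assuming the common role directory sits next to the playbook or under the configured roles path):

---

- hosts: webservers
  remote_user: root
  roles:
  - common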

I hope this documentation provided you with the basic knowledge of how to manage your servers with Ansible. Thank you for reading this. I welcome your valuable suggestions and comments on this.

Happy Automation!!

The post Getting Started with Ansible on Command Line appeared first on LinOxide.

How to Install Docker Engine in Ubuntu 16.04 LTS Xenial

$
0
0

Docker is a free and open source project for automating the deployment of apps in software containers, providing an open platform to pack, ship and run any application anywhere. It makes awesome use of the resource isolation features of the Linux kernel such as cgroups, kernel namespaces, and union-capable file systems. It is pretty easy and simple for deploying and scaling web apps, databases and back-end services independent of a particular stack or provider. The latest release, version 1.11.1, contains many additional features and bug fixes. In this article, we'll be installing the latest Docker Engine 1.11.1 on a machine running Ubuntu 16.04 LTS "Xenial".

System Requirements

Following are the system requirements that are essential to run the latest Docker Engine in Ubuntu 16.04 LTS Xenial.

  • Docker currently requires a 64-bit host, so we'll need a 64-bit version of Ubuntu Xenial installed on the host.
  • As we'll be downloading container images frequently, good internet connectivity on the host is required.
  • Make sure that the machine's CPU supports virtualization technology and that virtualization support is enabled in the BIOS.
  • Ubuntu Xenial running Linux kernel version 3.8 or above is supported.

Updating and Upgrading Xenial

First of all, we'll need to update the local repository index from the nearest mirror service so that we have the index of all the latest packages available in the Ubuntu repositories. To do so, we'll need to run the following command in a terminal or console.

$ sudo apt-get update

As our local repository index has been updated, we'll upgrade our Ubuntu Xenial to the latest packages available in the repositories via apt-get package manager.

$ sudo apt-get upgrade

Installing Docker Engine

Once our system has been upgraded, we'll move towards installing the latest Docker Engine, version 1.11, on our machine running the latest Ubuntu 16.04 Xenial LTS. We have two ways to install it on Ubuntu: either we run a simple script written by the official developers, or we manually add Docker's official repository and install it. Here in this tutorial, we'll show both methods to install Docker Engine.

Manual Installation

1. Adding the Repository

First of all, we'll need to add the new GPG key for our docker repository.

$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

Adding GPG Key

As the new GPG key for docker repository has been added to our machine, we'll now need to add the repository source to our apt source list. To do so, we'll use a text editor and create a file named docker.list under /etc/apt/sources.list.d/ directory.

$ sudo nano /etc/apt/sources.list.d/docker.list

Then, we'll add the following line to that file in order to add the repository to apt's sources.

deb https://apt.dockerproject.org/repo ubuntu-xenial main

Adding Docker Repository

2. Updating the APT's Index

As our repository for docker has been added, we'll now update the local repository index of the APT package manager so that we can use it to install the latest release. In order to update the local repository index, we'll need to run the following command inside a terminal or console.

$ sudo apt-get update

3. Installing Linux Kernel Extras

Now, as recommended, we'll install the Linux Kernel Extras on our machine running Ubuntu Xenial. We'll need this package as it is important for enabling the aufs storage driver. So, to install the linux-image-extra kernel package on our machine, we'll need to run the following command.

$ sudo apt-get install linux-image-extra-$(uname -r)

Installing Linux Image Extras

Here, as we have linux kernel 4.4.0-22 installed and running, the linux kernel extras of the respective kernel will be installed.

4. Installing Docker Engine

Once everything is setup and done, we'll now go towards the main part of the work where we'll install the latest docker engine in our latest Ubuntu 16.04 LTS Xenial machine. To do so, we'll need to run the following simple apt-get command.

$ sudo apt-get install docker-engine

Installing Docker Engine

Finally, we are done installing Docker Engine. Once the installation process is complete, we'll move towards the next step, where we'll add our current user to the docker group.

One-Script installation

If we wanna automate everything done above in the manual installation method, we'll need to follow this step. As said above, the Docker developers have written an awesome script that installs docker engine on a machine running Ubuntu 16.04 LTS Xenial fully automatically. This method is pretty fast, easy and simple to perform. A person with little knowledge of Ubuntu 16.04 can easily install docker using this script. So, before we start, we'll need to make sure that wget is installed on our machine. To install the wget downloader, we'll need to run the following command.

$ sudo apt-get install wget

Once the wget downloader is installed on our machine, we'll need to run the following wget command in order to run docker's official script and install the latest Docker Engine.

$ wget -qO- https://get.docker.com/ | sh

Adding User to Docker Group

Now, we'll add our user to the docker group. Doing so allows the docker daemon to authorize users in the docker group to run and manage docker containers. Here, arun is the username on our machine, so replace it with your own.

$ sudo usermod -aG docker arun

Once done, we'll need to logout and again login to the system to apply the changes into effect.

Starting the Docker Daemon

Next, we'll gonna start our Docker Daemon so that we can run, manage and control containers, images in our Ubuntu machine. As Ubuntu 16.04 LTS Xenial runs systemd as its default init system, we'll need to run the following systemctl command to start docker daemon.

$ sudo systemctl start docker
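If we also want the daemon to come up automatically on every boot, we can enable it as well.

$ sudo systemctl enable docker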

Checking the version

As our docker daemon has been started, we'll now test whether it is installed and running properly by checking the version of docker engine installed on our machine.

$ docker -v

Docker version 1.11.1, build 5604cbe

As version 1.11.1 was the latest release available at the time of writing this article, we should see the above output.

Running Docker Containers

Now, we'll run our first docker container in this step. If everything above is set up properly, we'll be able to run a container. Here in this tutorial, we'll run our all-time favorite testing container called Hello World. In order to run the hello-world container, we'll need to run the following docker command.

$ docker run hello-world

Hello World Docker

Doing this should print the output "Hello from Docker." from the container. This verifies that we have successfully installed docker engine and that it is capable of running containers.

In order to check which images were pulled while running the hello-world container, we'll need to run the following docker command.

$ docker images

Managing Docker

As our docker is running successfully, we'll also need to learn how to manage it. In this tutorial, we'll have a look at a few basic docker commands used to stop and remove a docker container and to pull and remove images.

Stopping a Running Container

Now, if we wanna stop a running container, we'll need to run the following command first to see the list of running containers.

$ docker ps -a

Then, we'll need to run the following docker stop command with the respective container id.

$ docker stop 646ed6509700

Removing a Container

To remove a stopped container, we'll need to run the following command specifying the stopped unused container id.

$ docker rm 646ed6509700

Pulling an Image

In order to pull a docker image, we'll need to run the pull command.

$ docker pull ubuntu

Pulling Docker Ubuntu Image

The above command pulls the latest image of ubuntu from the Docker Registry Hub.
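Once the image is pulled, we can, for example, start an interactive container from it and get a shell inside it (leave with exit or Ctrl+D):

$ docker run -it ubuntu /bin/bash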

Removing an Image

It is pretty easy to remove a docker image. First we'll need to list the available images on our machine.

$ docker images

Then, we'll run the following command to remove that image.

$ docker rmi ubuntu

Removing Docker Image

We have many commands to manage docker; we can see more in the official documentation of Docker.

Conclusion

Docker is an awesome technology enabling us to easily pack, run and ship applications independent of platform. It is pretty easy to install and run the latest Docker Engine on the latest Ubuntu release, Ubuntu 16.04 LTS Xenial. Once the installation is done, we can move further towards managing and networking containers and more. So, if you have any questions, suggestions, or feedback, please write them in the comment box below. Thank you! Enjoy :-)

The post How to Install Docker Engine in Ubuntu 16.04 LTS Xenial appeared first on LinOxide.

How to Install ReaR (Relax and Recover) on CentOS 7

$
0
0

ReaR (Relax-and-Recover) is a Linux bare metal disaster recovery and system migration solution. Relax-and-Recover is a true disaster recovery solution that creates recovery media from a running Linux system. If a hardware component fails, an administrator can boot the standby system with the ReaR rescue media and put the system back into its previous state. ReaR preserves the partitioning and formatting of the hard disk, restores all data, and sets up the boot loader configuration. ReaR is also well suited as a migration tool, because the restoration does not have to take place on the same hardware as the original. It builds the rescue medium with all existing drivers, and the restored system adjusts automatically to the changed hardware.

ReaR even detects changed network cards, as well as different storage scenarios with their respective drivers (migrating IDE to SATA or SATA to CCISS) and modified disk layouts. Relax-and-Recover was designed to be easy to set up, requires no maintenance and is there to assist when disaster strikes. Its set-up-and-forget nature removes any excuse for not having a disaster recovery solution implemented.

Prerequisites:

Relax-and-Recover is written entirely in Bash and does not require any external programs. However, the rescue system created by Relax-and-Recover requires some programs to work, namely 'mingetty' and 'sfdisk', while all other required programs like sort, dd, grep, etc. are already present in a minimal installation.

Let's start by updating your system using the command below on your CentOS 7 server.

# yum -y update

Make sure the following dependencies are also installed on your system, otherwise you will get errors about missing packages.

# yum install syslinux syslinux-extlinux

syslinux extlinux

Install Relax-and-Recover

Many Linux distributions ship Relax-and-Recover as part of their distribution, you can refer to the Relax-and-Recover Download page to get its stable release.

Let's run the 'yum' command below to download and install the rear package.

# yum install rear

The package will be installed, including its required dependencies, after you type the 'y' key to continue.

install relax and recover

You can also start by cloning the Relax-and-Recover sources from Github with below command.

# git clone git://github.com/rear/rear.git

Setup USB Media:

Prepare the USB media that Relax-and-Recover will be using. Here we are using an external drive which is '/dev/sdb'. Change '/dev/sdb' to the correct device in your situation.

Run the command below to format the device; note that this erases all data on it.

# /usr/sbin/rear format /dev/sdb

Relax-and-recover asks you to confirm if you want to format the device or not, let's type 'Yes' and hit 'Enter'.

USB device /dev/sdb must be formatted with ext2/3/4 or btrfs file system
Please type Yes to format /dev/sdb in ext3 format: Yes

The device has been labeled REAR-000 by the format workflow. Now edit the '/etc/rear/local.conf' configuration file with the configuration below.

# vim /etc/rear/local.conf

### write the rescue initramfs to USB and update the USB bootloader
OUTPUT=USB
#
#### create a backup using the internal NETFS method, using 'tar'
BACKUP=NETFS
#
#### write both rescue image and backup to the device labeled REAR-000
BACKUP_URL=usb:///dev/disk/by-label/REAR-000

Create Rescue Image:

Now you are ready to create a rescue image. Let's run the command below with the -v option to see verbose output.

# /usr/sbin/rear -v mkrescue

create rescue image

You might want to check the log file for possible errors or see what Relax-and-Recover is doing.

# tail -f /var/log/rear/rear-centos7.log

2016-05-16 00:19:52 Unmounting '/tmp/rear.Ir6gqwz2ROig9on/outputfs'
umount: /tmp/rear.Ir6gqwz2ROig9on/outputfs (/dev/sdb1) unmounted
rmdir: removing directory, '/tmp/rear.Ir6gqwz2ROig9on/outputfs'
2016-05-16 00:19:52 Finished running 'output' stage in 4 seconds
2016-05-16 00:19:52 Finished running mkrescue workflow
2016-05-16 00:19:52 Running exit tasks.
2016-05-16 00:19:52 Finished in 93 seconds
2016-05-16 00:19:52 Removing build area /tmp/rear.Ir6gqwz2ROig9on
rmdir: removing directory, '/tmp/rear.Ir6gqwz2ROig9on'
2016-05-16 00:19:53 End of program reached

Now reboot your system and try to boot from the USB device. If you are able to boot the rescue system from the USB drive, your work is done. You can also check by mounting the drive. Now let's dive into the advanced Relax-and-Recover options and start creating full backups.

# /usr/sbin/rear -v mkbackup

rear full backup
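Since the rescue image and backup should stay in sync with the system, it is common to schedule the run via cron; a minimal sketch, assuming a nightly full backup at 01:30 suits your environment:

# crontab -e
30 1 * * * /usr/sbin/rear mkbackup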

Rescue system:

Relax-and-Recover will not automatically add itself to the Grub bootloader; it only copies itself to your /boot folder. To enable a Grub entry, add the line below to your local configuration.

GRUB_RESCUE=1

The entry in the bootloader is password protected. The default password is REAR. Change it in your own 'local.conf' file.

GRUB_RESCUE_PASSWORD="SECRET"

Storing on a central NFS server:

The most straightforward way to store your DR images is using a central NFS server. The configuration below will store both the backup and the rescue ISO in a directory on the share.

OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL="nfs://192.168.122.1/nfs/rear/"

Relax-and-Recover Configurations:

To configure Relax-and-Recover you have to edit the configuration files in the '/etc/rear/' directory. All *.conf files there are part of the configuration, but only 'site.conf' and 'local.conf' are intended for user configuration. All other configuration files hold defaults for various distributions and should not be changed.

In almost all circumstances you have to configure two main settings and their parameters: The BACKUP method and the OUTPUT method.

The backup method defines how your data is saved and whether Relax-and-Recover should back up your data as part of the mkrescue process or whether you use an external application, e.g. backup software, to archive your data.

The output method defines how the rescue system is written to disk and how you plan to boot the failed computer from the rescue system. You can look in the file '/usr/share/rear/conf/default.conf' for an overview of the possible methods and their options.

Using Relax-and-Recover

To use Relax-and-Recover you always call the main script '/usr/sbin/rear'. To get the list of all available commands, run the command below.

# rear help

rear usage

To view/verify your configuration, run 'rear dump'. It will print out the current settings for BACKUP and OUTPUT methods and some system information.

# rear dump

To recover your system, boot the computer from the rescue system and run 'rear recover'. Your system will be recovered and you can restart it and continue to use it normally.

Conclusion:

Relax-and-Recover (ReaR) is the leading open source disaster recovery solution, and the successor to mkcdrec. It was designed to be easy to set up, requires no maintenance and assists when disaster strikes. This was a detailed article on ReaR installation and its use cases. Feel free to get back to us in case of any difficulty; just leave us your comments or suggestions.

The post How to Install ReaR (Relax and Recover) on CentOS 7 appeared first on LinOxide.

How to Install Chef Workstation / Server / Node on CentOS 7

$
0
0

Chef is an automation platform that configures and manages your infrastructure. It transforms infrastructure into code. It is a Ruby-based configuration management tool. This automation platform consists of a Chef workstation, a Chef server and Chef clients, which are the nodes managed by the Chef server. All the Chef configuration files, recipes, cookbooks, templates etc. are created and tested on the Chef workstation and are uploaded to the Chef server, which then distributes them across every node registered within the organisation. It is an ideal automation framework for Ceph and OpenStack. Not only does it give us complete control, but it's also super easy to work with.

In this article, I'm explaining the steps I followed for implementing a Chef automation environment on my CentOS 7 servers.

Pre-requisites

  • It is recommended to have a FQDN hostname
  • Chef supports only 64 bit architecture
  • Proper network/Firewall/hosts configurations are recommended

How Chef works?

work procedure

Chef comprises a workstation which is configured to develop recipes and cookbooks. It is also configured to run knife and synchronizes with the chef-repo to keep it up-to-date. It helps in configuring organizational policy, including defining roles & environments and ensuring that critical data is stored in data bags. Once these recipes/cookbooks are tested on the workstation, we can upload them to our Chef server. The Chef server stores these recipes and assigns them to the nodes depending on their requirements. Basically, nodes communicate only with the Chef server and take instructions and recipes from there.

In my demo setup, I have three servers, namely:

  1. chefserver.test20.com         -     Chef Server
  2. chefwork.test20.com           -     Chef Workstation
  3. chefnode.test20.com           -     Chef Node

Let us start with building the Workstation.

Setup a Workstation

First of all, log in to our server chefwork, then download the Chef development package. Once the package is downloaded, we can install it using the rpm command.

[root@chefwork ~]# wget https://packages.chef.io/stable/el/7/chefdk-0.14.25-1.el7.x86_64.rpm
--2016-05-20 03:47:31-- https://packages.chef.io/stable/el/7/chefdk-0.14.25-1.el7.x86_64.rpm
Resolving packages.chef.io (packages.chef.io)... 75.126.118.188, 108.168.243.150
Connecting to packages.chef.io (packages.chef.io)|75.126.118.188|:443... connected.
HTTP request sent, awaiting response... 302
Location: https://akamai.bintray.com/87/879656c7736ef2a061937c1f45c623e99fd57aaa2f6d802e9799d333d7e5342f?__gda__=exp=1463716772~hmac=ef9ce287129ab2f035449b76a1adc32b7bf8cae37f018f59da5a642d3e2650fc&response-content-disposition=attachment%3Bfilename%3D%22chefdk-0.14.25-1.el7.x86_64.rpm%22&response-content-type=application%2Foctet-stream [following]
--2016-05-20 03:47:32-- https://akamai.bintray.com/87/879656c7736ef2a061937c1f45c623e99fd57aaa2f6d802e9799d333d7e5342f?__gda__=exp=1463716772~hmac=ef9ce287129ab2f035449b76a1adc32b7bf8cae37f018f59da5a642d3e2650fc&response-content-disposition=attachment%3Bfilename%3D%22chefdk-0.14.25-1.el7.x86_64.rpm%22&response-content-type=application%2Foctet-stream
Resolving akamai.bintray.com (akamai.bintray.com)... 104.123.250.232
Connecting to akamai.bintray.com (akamai.bintray.com)|104.123.250.232|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 143927478 (137M) [application/octet-stream]
Saving to: ‘chefdk-0.14.25-1.el7.x86_64.rpm’

100%[====================================================================================================>] 14,39,27,478 2.52MB/s in 55s

2016-05-20 03:48:29 (2.49 MB/s) - ‘chefdk-0.14.25-1.el7.x86_64.rpm’ saved [143927478/143927478]

[root@chefwork ~]# rpm -ivh chefdk-0.14.25-1.el7.x86_64.rpm
warning: chefdk-0.14.25-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:chefdk-0.14.25-1.el7 ################################# [100%]
Thank you for installing Chef Development Kit!

What is ChefDK?

The Chef Development Kit contains everything you need to get started with Chef, along with the tools essential for managing code.

  • It contains a new command-line tool, "chef"
  • The cookbook dependency manager Berkshelf
  • The Test Kitchen integration testing framework.
  • ChefSpec for testing the cookbook syntax
  • Foodcritic, a tool for doing static code analysis on cookbooks.
  • It also has all the Chef tools like Chef Client, Knife, Ohai and Chef Zero

Let's start by creating some recipes on the workstation and testing them locally to ensure they work.

Create a folder named chef-repo under /root/, and inside that folder we can create our recipes.

[root@chefwork ~]# mkdir chef-repo
[root@chefwork ~]# cd chef-repo

Creating a recipe called hello.rb:

[root@chefwork chef-repo]# vim hello.rb
[root@chefwork chef-repo]#
[root@chefwork chef-repo]# cat hello.rb
file '/etc/motd' do
  content 'Welcome to Chef'
end

This recipe hello.rb creates a file named /etc/motd with the content "Welcome to Chef". It makes use of the file resource to accomplish this task. Now we can run the recipe to check that it works.

[root@chefwork chef-repo]# chef-apply hello.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* file[/etc/motd] action create (up to date)

Confirm the recipe execution:

[root@chefwork chef-repo]# cat /etc/motd
Welcome to Chef
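As a side note, the file resource also accepts ownership and permission attributes; a minimal sketch with illustrative values:

file '/etc/motd' do
  content 'Welcome to Chef'
  mode '0644'
  owner 'root'
  group 'root'
end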

Deleting the file

We can modify our recipe file to delete the created file and run it using the command chef-apply as below:

[root@chefwork chef-repo]# cat hello.rb
file '/etc/motd' do
  action :delete
end

[root@chefwork chef-repo]# chef-apply hello.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* file[/etc/motd] action delete
- delete file /etc/motd

Installing a package

We're modifying our recipe file to install the httpd package on our server and to copy an index.html file to the default document root to confirm the installation. The package and service resources are used to implement this. The default action for a package resource is installation, hence we needn't specify that action separately.

[root@chefwork chef-conf]# cat hello.rb
package 'httpd'
service 'httpd' do
  action [:enable, :start]
end

file '/var/www/html/index.html' do
  content 'Welcome to Apache in Chef'
end
[root@chefwork chef-conf]# chef-apply hello.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* yum_package[httpd] action install
- install version 2.4.6-40.el7.centos.1 of package httpd
* service[httpd] action enable
- enable service service[httpd]
* service[httpd] action start
- start service service[httpd]
* file[/var/www/html/index.html] action create (up to date)

The command execution clearly describes each step in the recipe: it installs the Apache package, enables and starts the httpd service on the server, and creates an index.html file in the default document root with the content "Welcome to Apache in Chef". We can verify it by opening the server IP in the browser.

welcomepage_httpd

Creating Cookbooks

Now we can create our first cookbook. Create a folder called cookbooks under the chef-repo directory and execute the command "chef generate cookbook [cookbook name]" to generate our cookbook.

[root@chefwork chef-repo]# mkdir cookbooks
[root@chefwork chef-repo]# cd cookbooks/
[root@chefwork cookbooks]# chef generate cookbook httpd_deploy
Installing Cookbook Gems:
Compiling Cookbooks...
Recipe: code_generator::cookbook
* directory[/root/chef-repo/cookbook/httpd_deploy] action create
- create new directory /root/chef-repo/cookbook/httpd_deploy

cookbook filestructure

This is the file structure of the created cookbook; let's look at the purpose of each of these files and folders one by one (a sample metadata.rb sketch follows this list).

Berksfile : The configuration file that tells Berkshelf what the cookbook's dependencies are; they can be specified directly inside this file or indirectly through metadata.rb. It also tells Berkshelf where it should look for those dependencies.

chefignore : Tells Chef which files should be ignored when uploading a cookbook to the Chef server.

metadata.rb : Contains meta information about your cookbook, such as its name, contacts, and description. It can also state the cookbook's dependencies.

README.md : The documentation entry point for the repo.

recipes : Contains the cookbook's recipes. Execution starts with the file default.rb.

default.rb : The default recipe.

spec : Stores the unit test cases of your libraries.

test : Stores the unit test cases of your recipes.
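
As promised above, here is a hedged sketch of what the generated metadata.rb typically looks like for this cookbook; the field values are assumptions, as the generator fills in its own defaults:

name 'httpd_deploy'
maintainer 'The Authors'            # assumption: placeholder from the generator
maintainer_email 'you@example.com'  # assumption: placeholder address
license 'All Rights Reserved'
description 'Installs/Configures httpd_deploy'
version '0.1.0'
# depends 'firewall', '~> 2.0'      # example: how a dependency would be declared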

Creating a template

Next we are going to create a template file of our own. Earlier, we created a file with fixed contents, but that approach doesn't fit in with our recipe and cookbook structure, so let's see how we can create a template.

[root@chefwork cookbook]# chef generate template httpd_deploy index.html
Installing Cookbook Gems:
Compiling Cookbooks...
Recipe: code_generator::template
* directory[./httpd_deploy/templates/default] action create
- create new directory ./httpd_deploy/templates/default
* template[./httpd_deploy/templates/default/index.html.erb] action create
- create new file ./httpd_deploy/templates/default/index.html.erb
- update content in file ./httpd_deploy/templates/default/index.html.erb from none to e3b0c4
(diff output suppressed by config)


template

Now if you look at our cookbook file structure, there is a new folder named templates containing the file index.html.erb. We can edit our index.html.erb template file and add it to our recipe as below:

[root@chefwork default]# cat index.html.erb
Welcome to Chef Apache Deployment
[root@chefwork default]# pwd
/root/chef-repo/cookbook/httpd_deploy/templates/default
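
Templates become more useful with embedded Ruby. Here is a hypothetical variant of index.html.erb that pulls in Ohai node attributes; this is illustrative and not part of the tutorial's actual file:

<%# index.html.erb - illustrative variant using node attributes %>
Welcome to Chef Apache Deployment on <%= node['hostname'] %>
Platform: <%= node['platform'] %> <%= node['platform_version'] %>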

Creating the recipe with this template

[root@chefwork recipes]# pwd
/root/chef-repo/cookbook/httpd_deploy/recipes
[root@chefwork recipes]# cat default.rb
#
# Cookbook Name:: httpd_deploy
# Recipe:: default
#
# Copyright (c) 2016 The Authors, All Rights Reserved.
package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

template '/var/www/html/index.html' do
  source 'index.html.erb'
end
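
The template resource can also pass values into the ERB file explicitly. A minimal sketch, assuming the template references a @greeting variable as <%= @greeting %>:

template '/var/www/html/index.html' do
  source 'index.html.erb'
  variables(greeting: 'Welcome to Chef Apache Deployment') # available in the template as @greeting
  mode '0644'                                              # illustrative permission setting
end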

Now go back to our chef-repo folder and run/test our recipe on our Workstation.

[root@chefwork chef-repo]# chef-client --local-mode --runlist 'recipe[httpd_deploy]'
[2016-05-20T05:44:40+00:00] WARN: No config file found or specified on command line, using command line options.
Starting Chef Client, version 12.10.24
resolving cookbooks for run list: ["httpd_deploy"]
Synchronizing Cookbooks:
- httpd_deploy (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
Converging 3 resources
Recipe: httpd_deploy::default
* yum_package[httpd] action install
- install version 2.4.6-40.el7.centos.1 of package httpd
* service[httpd] action enable
- enable service service[httpd]
* service[httpd] action start
- start service service[httpd]
* template[/var/www/html/index.html] action create
- update content in file /var/www/html/index.html from 152204 to 748cbd
--- /var/www/html/index.html 2016-05-20 04:18:38.553231745 +0000
+++ /var/www/html/.chef-index.html20160520-20425-1bez4qs 2016-05-20 05:44:47.344848833 +0000
@@ -1,2 +1,2 @@
-Welcome to Apache in Chef
+Welcome to Chef Apache Deployment

Running handlers:
Running handlers complete
Chef Client finished, 4/4 resources updated in 06 seconds

[root@chefwork chef-repo]# cat /var/www/html/index.html
Welcome to Chef Apache Deployment

According to our recipe, Apache is installed on our workstation, the service is started and enabled on boot, and a template file has been created in our default document root.

Now we've tested our Workstation. It's time for the Chef server setup.

Setting up the Chef Server

First of all, log in to our Chef server "chefserver.test20.com" and download the Chef server package compatible with our OS version.

[root@chefserver ~]# wget https://packages.chef.io/stable/el/7/chef-server-core-12.6.0-1.el7.x86_64.rpm
--2016-05-20 07:23:46-- https://packages.chef.io/stable/el/7/chef-server-core-12.6.0-1.el7.x86_64.rpm
Resolving packages.chef.io (packages.chef.io)... 75.126.118.188, 108.168.243.150
Connecting to packages.chef.io (packages.chef.io)|75.126.118.188|:443... connected.
HTTP request sent, awaiting response... 302
Location: https://akamai.bintray.com/5a/5a36d0ffa692bf788e90315171582a758d4c5d8033a892dca9a81d3c03c44d14?__gda__=exp=1463729747~hmac=86e28bf2d5197154c84b571330b4c897006c2cb7f14cc9fc386c62d8b6e34c2d&response-content-disposition=attachment%3Bfilename%3D%22chef-server-core-12.6.0-1.el7.x86_64.rpm%22&response-content-type=application%2Foctet-stream [following]
--2016-05-20 07:23:47-- https://akamai.bintray.com/5a/5a36d0ffa692bf788e90315171582a758d4c5d8033a892dca9a81d3c03c44d14?__gda__=exp=1463729747~hmac=86e28bf2d5197154c84b571330b4c897006c2cb7f14cc9fc386c62d8b6e34c2d&response-content-disposition=attachment%3Bfilename%3D%22chef-server-core-12.6.0-1.el7.x86_64.rpm%22&response-content-type=application%2Foctet-stream
Resolving akamai.bintray.com (akamai.bintray.com)... 23.15.249.68
Connecting to akamai.bintray.com (akamai.bintray.com)|23.15.249.68|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 481817688 (459M) [application/octet-stream]
Saving to: ‘chef-server-core-12.6.0-1.el7.x86_64.rpm’

100%[====================================================================================================>] 48,18,17,688 2.90MB/s in 3m 53s

[root@chefserver ~]# rpm -ivh chef-server-core-12.6.0-1.el7.x86_64.rpm
warning: chef-server-core-12.6.0-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:chef-server-core-12.6.0-1.el7 ################################# [100%]

Now our Chef server is installed, but we need to reconfigure it to enable and start all the services bundled in the Chef server. We can run this command to reconfigure:

[root@chefserver ~]# chef-server-ctl reconfigure
Starting Chef Client, version 12.10.26
resolving cookbooks for run list: ["private-chef::default"]
Synchronizing Cookbooks:
- enterprise (0.10.0)
- apt (2.9.2)
- yum (3.10.0)
- openssl (4.4.0)
- chef-sugar (3.3.0)
- packagecloud (0.0.18)
- runit (1.6.0)
- private-chef (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
[2016-05-19T02:38:37+00:00] WARN: Chef::Provider::AptRepository already exists! Cannot create deprecation class for LWRP provider apt_repository from cookbook apt
Chef Client finished, 394/459 resources updated in 04 minutes 05 seconds
Chef Server Reconfigured!

Confirm the status of the services and their PIDs by running this command:

[root@chefserver ~]# chef-server-ctl status
run: bookshelf: (pid 6140) 162s; run: log: (pid 6156) 162s
run: nginx: (pid 6051) 165s; run: log: (pid 6295) 156s
run: oc_bifrost: (pid 5987) 167s; run: log: (pid 6022) 167s
run: oc_id: (pid 6038) 165s; run: log: (pid 6042) 165s
run: opscode-erchef: (pid 6226) 159s; run: log: (pid 6214) 161s
run: opscode-expander: (pid 6102) 162s; run: log: (pid 6133) 162s
run: opscode-solr4: (pid 6067) 164s; run: log: (pid 6095) 163s
run: postgresql: (pid 5918) 168s; run: log: (pid 5960) 168s
run: rabbitmq: (pid 5876) 168s; run: log: (pid 5869) 169s
run: redis_lb: (pid 5795) 290s; run: log: (pid 6280) 156s

Hurray!! Our Chef server is ready :). Now we can install the management console to get a web interface to manage the Chef server.

Installing Management Console for Chef Server

We can install the management console by running the command "chef-server-ctl install chef-manage" on the Chef server.

[root@chefserver ~]# chef-server-ctl install chef-manage
Starting Chef Client, version 12.10.26
resolving cookbooks for run list: ["private-chef::add_ons_wrapper"]
Synchronizing Cookbooks:
- enterprise (0.10.0)
- apt (2.9.2)
- yum (3.10.0)
- openssl (4.4.0)
- runit (1.6.0)
- chef-sugar (3.3.0)
- packagecloud (0.0.18)
- private-chef (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
Converging 4 resources
Recipe: private-chef::add_ons_wrapper
* ruby_block[addon_install_notification_chef-manage] action nothing (skipped due to action :nothing)
* remote_file[/var/opt/opscode/local-mode-cache/chef-manage-2.3.0-1.el7.x86_64.rpm] action create
- create new file /var/opt/opscode/local-mode-cache/chef-manage-2.3.0-1.el7.x86_64.rpm
- update content in file /var/opt/opscode/local-mode-cache/chef-manage-2.3.0-1.el7.x86_64.rpm from none to 098cc4
(file sizes exceed 10000000 bytes, diff output suppressed)
* ruby_block[locate_addon_package_chef-manage] action run
- execute the ruby block locate_addon_package_chef-manage
* yum_package[chef-manage] action install
- install version 2.3.0-1.el7 of package chef-manage
* ruby_block[addon_install_notification_chef-manage] action create
- execute the ruby block addon_install_notification_chef-manage

Running handlers:
-- Installed Add-On Package: chef-manage
- #<Class:0x00000006032b80>::AddonInstallHandler
Running handlers complete
Chef Client finished, 4/5 resources updated in 02 minutes 39 seconds

After installing the management console, we need to reconfigure it and then the Chef server so that their services pick up these changes.

[root@chefserver ~]# opscode-manage-ctl reconfigure
To use this software, you must agree to the terms of the software license agreement.
Press any key to continue.
Type 'yes' to accept the software license agreement, or anything else to cancel.
yes
Starting Chef Client, version 12.4.1
resolving cookbooks for run list: ["omnibus-chef-manage::default"]
Synchronizing Cookbooks:
- omnibus-chef-manage
- chef-server-ingredient
- enterprise
Recipe: omnibus-chef-manage::default
* private_chef_addon[chef-manage] action create (up to date)
Recipe: omnibus-chef-manage::config
Running handlers:
Running handlers complete
Chef Client finished, 62/79 resources updated in 44.764229437 seconds
chef-manage Reconfigured!

[root@chefserver ~]# chef-server-ctl reconfigure

Now that our management console is ready, we need to set up an admin user to manage our Chef server.

Creating Admin user/Organization

I've created an admin user named chefadmin and an organization named linox on my Chef server to manage it. We can create the user with the command chef-server-ctl user-create and the organization with chef-server-ctl org-create.

[root@chefserver ~]# chef-server-ctl user-create chefadmin saheetha shameer saheetha@gmail.com 'chef123' --filename /root/.chef/chefadmin.pem
[root@chefserver ~]#

[root@chefserver .chef]# chef-server-ctl org-create linox Chef Linoxide --association_user chefadmin --filename /root/.chef/linoxvalidator.pem
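
To confirm that the user and organization were created, the server ships list subcommands (output omitted here):

[root@chefserver ~]# chef-server-ctl user-list
[root@chefserver ~]# chef-server-ctl org-list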

Our keys are saved inside the /root/.chef folder. We need to copy these keys from the Chef server to the workstation to initiate communication between the Chef server and the workstation.

Copying the Keys

I'm copying my user and validator keys from the Chef server to the workstation to enable the connection between the servers.

[root@chefserver .chef]# scp chefadmin.pem root@139.162.35.39:/root/chef-repo/.chef/
The authenticity of host '139.162.35.39 (139.162.35.39)' can't be established.
ECDSA key fingerprint is 5b:0b:07:85:9a:fb:b6:59:51:07:7f:14:1b:07:07:f0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '139.162.35.39' (ECDSA) to the list of known hosts.
root@139.162.35.39's password:
chefadmin.pem 100% 1678 1.6KB/s 00:00
[root@chefserver .chef]#

[root@chefserver .chef]# scp linoxvalidator.pem root@139.162.35.39:/root/chef-repo/.chef/
The authenticity of host '139.162.35.39 (139.162.35.39)' can't be established.
ECDSA key fingerprint is 5b:0b:07:85:9a:fb:b6:59:51:07:7f:14:1b:07:07:f0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '139.162.35.39' (ECDSA) to the list of known hosts.
root@139.162.35.39's password:
linoxvalidator.pem 100% 1678 1.6KB/s 00:00
[root@chefserver .chef]#

Now log in to the management console of our Chef server with the chefadmin user we created.

chef_management console

On sign-up, the panel will ask you to create an organization; just create a different one from the one created on the CLI.

Download the Starter Kit for WorkStation

Choose any of your organizations and download the Starter Kit from the Chef server to our workstation.

starter kit download

After downloading the kit, move it to the workstation's /root folder and extract it. This provides you with a default starter kit to begin working with your Chef server. It includes a chef-repo.

[root@chefwork ~]# ls
chef-starter.zip hello.rb
[root@chefwork ~]# unzip chef-starter.zip
Archive: chef-starter.zip
creating: chef-repo/cookbooks/
creating: chef-repo/cookbooks/starter/
creating: chef-repo/cookbooks/starter/recipes/
inflating: chef-repo/cookbooks/starter/recipes/default.rb
creating: chef-repo/cookbooks/starter/files/
creating: chef-repo/cookbooks/starter/files/default/
inflating: chef-repo/cookbooks/starter/files/default/sample.txt
creating: chef-repo/cookbooks/starter/templates/
creating: chef-repo/cookbooks/starter/templates/default/
inflating: chef-repo/cookbooks/starter/templates/default/sample.erb
inflating: chef-repo/cookbooks/starter/metadata.rb
creating: chef-repo/cookbooks/starter/attributes/
inflating: chef-repo/cookbooks/starter/attributes/default.rb
inflating: chef-repo/cookbooks/chefignore
inflating: chef-repo/README.md
inflating: chef-repo/.gitignore
creating: chef-repo/.chef/
creating: chef-repo/roles/
inflating: chef-repo/.chef/knife.rb
inflating: chef-repo/roles/starter.rb
inflating: chef-repo/.chef/chefadmin.pem
inflating: chef-repo/.chef/ln_blog-validator.pem

chef-repo

This is the file structure of the downloaded Chef repository. It contains all the required files and folders to get started.

Cookbook SuperMarket

Chef cookbooks are available in the Cookbook Supermarket; we can browse the Chef Supermarket and download the required cookbooks from there. I'm downloading one of the cookbooks to install Apache.

[root@chefwork chef-repo]# knife cookbook site download learn_chef_httpd
Downloading learn_chef_httpd from Supermarket at version 0.2.0 to /root/chef-repo/learn_chef_httpd-0.2.0.tar.gz
Cookbook saved: /root/chef-repo/learn_chef_httpd-0.2.0.tar.gz

Extract this cookbook inside the "cookbooks" folder.

[root@chefwork chef-repo]# tar -xvf learn_chef_httpd-0.2.0.tar.gz

learn

All the required files are automatically created under this cookbook; we don't need to make any modifications. Let's check the recipe inside its recipes folder.

[root@chefwork recipes]# cat default.rb
#
# Cookbook Name:: learn_chef_httpd
# Recipe:: default
#
# Copyright (C) 2014
#
#
#
package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

template '/var/www/html/index.html' do
  source 'index.html.erb'
end

service 'iptables' do
  action :stop
end
[root@chefwork recipes]#
[root@chefwork recipes]# pwd
/root/chef-repo/cookbooks/learn_chef_httpd/recipes
[root@chefwork recipes]#

So we just need to upload this cookbook to our Chef server as it looks perfect.

Validating the Connection Between Server and Workstation

Before uploading the cookbook, we need to check and confirm the connection between our Chef server and workstation. First of all, make sure you have a proper knife configuration file.

[root@chefwork .chef]# cat knife.rb
current_dir = File.dirname(__FILE__)
log_level :info
log_location STDOUT
node_name "chefadmin"
client_key "#{current_dir}/chefadmin.pem"
validation_client_name "linox-validator"
validation_key "#{current_dir}/linox-validator.pem"
chef_server_url "https://chefserver.test20.com:443/organizations/linox"

cookbook_path ["#{current_dir}/../cookbooks"]

This configuration file is located in the /root/chef-repo/.chef folder. The node_name, client_key, validation_client_name, validation_key, and chef_server_url settings are the main things to take care of. Now you can run this command to check the connection.

[root@chefwork .chef]# knife client list
ERROR: SSL Validation failure connecting to host: chefserver.test20.com - SSL_connect returned=1 errno=0 state=error: certificate verify failed
ERROR: Could not establish a secure connection to the server.
Use `knife ssl check` to troubleshoot your SSL configuration.
If your Chef Server uses a self-signed certificate, you can use
`knife ssl fetch` to make knife trust the server's certificates.

Original Exception: OpenSSL::SSL::SSLError: SSL Error connecting to https://chefserver.test20.com/clients - SSL_connect returned=1 errno=0 state=error: certificate verify failed

You can see an SSL validation error reported. To rectify this error, we need to fetch the SSL certificate of our Chef server and store it inside the /root/chef-repo/.chef/trusted_certs folder. We can do this by running this command:

[root@chefwork .chef]# knife ssl fetch
WARNING: Certificates from chefserver.test20.com will be fetched and placed in your trusted_cert
directory (/root/chef-repo/.chef/trusted_certs).

Knife has no means to verify these are the correct certificates. You should
verify the authenticity of these certificates after downloading.

Adding certificate for chefserver.test20.com in /root/chef-repo/.chef/trusted_certs/chefserver_test20_com.crt

Verifying the SSL:

[root@chefwork .chef]# knife ssl check
Connecting to host chefserver.test20.com:443
Successfully verified certificates from `chefserver.test20.com'

[root@chefwork .chef]# knife client list
chefnode
linox-validator
[root@chefwork .chef]# knife user list
chefadmin

Uploading the Cookbook

We can upload our cookbook to our chef server from the workstation using the knife command as below:

# knife cookbook upload learn_chef_httpd

[root@chefwork cookbooks]# knife cookbook upload learn_chef_httpd
Uploading learn_chef_httpd [0.2.0]
Uploaded 1 cookbook.
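
We can also confirm the upload from the workstation CLI (output omitted):

[root@chefwork cookbooks]# knife cookbook list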

Verify the cookbook from the Chef Server Management console.

uploadedcookbook


Adding a Node

This is the final step in the Chef implementation. We've set up a workstation and a Chef server; now we need to add our client nodes to the Chef server for automation. I'm adding my chefnode to the server using the knife bootstrap command as below:

[root@chefwork cookbooks]# knife bootstrap 45.33.76.60 --ssh-user root --ssh-password dkfue@321 --node-name chefnode
Creating new client for chefnode
Creating new node for chefnode
Connecting to 45.33.76.60
45.33.76.60 -----> Installing Chef Omnibus (-v 12)
45.33.76.60 downloading https://omnitruck-direct.chef.io/chef/install.sh
45.33.76.60 to file /tmp/install.sh.5457/install.sh
45.33.76.60 trying wget...
45.33.76.60 el 7 x86_64
45.33.76.60 Getting information for chef stable 12 for el...
45.33.76.60 downloading https://omnitruck-direct.chef.io/stable/chef/metadata?v=12&p=el&pv=7&m=x86_64
45.33.76.60 to file /tmp/install.sh.5466/metadata.txt
45.33.76.60 trying wget...
45.33.76.60 sha1 4def83368a1349959fdaf0633c4d288d5ae229ce
45.33.76.60 sha256 6f00c7bdf96a3fb09494e51cd44f4c2e5696accd356fc6dc1175d49ad06fa39f
45.33.76.60 url https://packages.chef.io/stable/el/7/chef-12.10.24-1.el7.x86_64.rpm
45.33.76.60 version 12.10.24
45.33.76.60 downloaded metadata file looks valid...
45.33.76.60 downloading https://packages.chef.io/stable/el/7/chef-12.10.24-1.el7.x86_64.rpm
45.33.76.60 to file /tmp/install.sh.5466/chef-12.10.24-1.el7.x86_64.rpm
45.33.76.60 trying wget...
45.33.76.60 Comparing checksum with sha256sum...
45.33.76.60 Installing chef 12
45.33.76.60 installing with rpm...
45.33.76.60 warning: /tmp/install.sh.5466/chef-12.10.24-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
45.33.76.60 Preparing... ################################# [100%]
45.33.76.60 Updating / installing...
45.33.76.60 1:chef-12.10.24-1.el7 ################################# [100%]
45.33.76.60 Thank you for installing Chef!
45.33.76.60 Starting the first Chef Client run...
45.33.76.60 Starting Chef Client, version 12.10.24
45.33.76.60 resolving cookbooks for run list: []
45.33.76.60 Synchronizing Cookbooks:
45.33.76.60 Installing Cookbook Gems:
45.33.76.60 Compiling Cookbooks...
45.33.76.60 [2016-05-20T15:36:41+00:00] WARN: Node chefnode has an empty run list.
45.33.76.60 Converging 0 resources
45.33.76.60
45.33.76.60 Running handlers:
45.33.76.60 Running handlers complete
45.33.76.60 Chef Client finished, 0/0 resources updated in 08 seconds
[root@chefwork chef-repo]#

This command also installs the chef-client on the Chef node. You can verify the node from the CLI on the workstation using the knife commands below:

[root@chefwork chef-repo]# knife node list
chefnode

[root@chefwork chef-repo]# knife node show chefnode
Node Name: chefnode
Environment: _default
FQDN: chefnode.test20.com
IP: 45.33.76.60
Run List: recipe[learn_chef_httpd]
Roles:
Recipes:
Platform: centos 7.2.1511
Tags:

Verifying it from the Management console.

added nodechef

We can get more information regarding the added node by selecting the node and viewing the Attributes section.

node details

Managing Node Run List

Let's see how we can add a cookbook to the node and manage its runlist from the Chef server. As you see in the screenshot, you can click the Actions tab and select the Edit Runlist option to manage the runlist.

node_run

Under Available Recipes, you can see our learn_chef_httpd recipe; drag it from the available list to the current run list and save the run list.

drag_recipe
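
Alternatively, the same change can be made from the workstation CLI instead of the web UI; a hedged example using the node name from above:

[root@chefwork chef-repo]# knife node run_list add chefnode 'recipe[learn_chef_httpd]'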

Now log in to your node and run the command chef-client to execute your run list.

[root@chefnode ~]# chef-client
Starting Chef Client, version 12.10.24
resolving cookbooks for run list: ["learn_chef_httpd"]
Synchronizing Cookbooks:
- learn_chef_httpd (0.2.0)
Installing Cookbook Gems:
Compiling Cookbooks...
Converging 4 resources
Recipe: learn_chef_httpd::default
* yum_package[httpd] action install

Similarly, we can add any number of nodes to your Chef server, depending on its configuration and hardware. I hope this article gave you a basic understanding of how Chef is implemented. I welcome your valuable comments and suggestions. Thank you for reading :)

Happy Automation with Chef!!

The post How to Install Chef Workstation / Server / Node on CentOS 7 appeared first on LinOxide.
