Channel: LINUX HOWTO – LinOxide

How to Configure nftables to Serve Internet


Hello everyone! This time I will show how to set up nftables on a Linux box to serve as a firewall and internet gateway: how to build the Linux kernel with nftables enabled, how to install the nftables user-space tool and its dependencies, and how to use the nft utility to perform network filtering and IP address translation.

The nftables project is intended to replace the current netfilter tools such as iptables, ebtables and arptables, along with the kernel-space infrastructure, with a renewed infrastructure and a user-space tool, nft, which has a simpler and cleaner syntax but maintains the essence of the tools we use today.

Check your kernel

Nftables has been in the Linux kernel tree since kernel 3.13; you just need to enable the nftables-related symbols using the usual kernel config tools and build it. However, the masquerade and redirect network address translation targets were introduced in kernels 3.18 and 3.19 respectively, and they are needed for NAT.

Get your kernel release number with the following command

uname -r

To check if the nf_tables module is already compiled, try this

modinfo nf_tables

You should see information relevant to the module, but if you get an error, you will need another kernel.
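If your distribution ships a plain-text kernel config, you can also grep it directly. A minimal sketch, assuming the config lives at /boot/config-$(uname -r), which is common but distro-dependent:

```shell
# Hypothetical helper: check a plain-text kernel config for nftables support.
has_nftables() {
    # $1: path to a kernel config file
    grep -q '^CONFIG_NF_TABLES=[ym]' "$1"
}

# Example (path is an assumption, adjust for your distro):
# has_nftables "/boot/config-$(uname -r)" && echo "nf_tables is available"
```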

Building a nftables compatible kernel

Let's compile kernel 4.2; it is the latest stable kernel as I write this and has everything we need for nftables.

Enter /usr/src

cd /usr/src

Download xz package of the Linux kernel from kernel.org

wget --no-check-certificate https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.2.tar.xz

Extract the sources from the xz package

tar Jxvf linux-4.2.tar.xz

Move your old Linux kernel tree

mv linux linux-old

Create a link to the new Linux tree

ln -s linux-4.2 linux

Copy your old .config to the new kernel tree

cp linux-old/.config linux/.config

And then enter the Linux kernel tree

cd linux

Now prepare your old .config for the new kernel with the olddefconfig target, which maintains your current kernel settings and sets new symbols to their defaults.

make olddefconfig

Now, use the menuconfig target to navigate the curses-like menu and follow the options related to nftables

make menuconfig

Networking support

Networking options

Network packet filtering framework (Netfilter)

Core Netfilter Configuration

Enter core Netfilter settings

Enable Netfilter nf_tables support and related modules

Now go up one level, back to main Netfilter settings and enter IP:Netfilter Configuration

Enter IPv4 Netfilter settings

There you enable NAT chain for nf_tables and also masquerading and redirect targets.

Enable Nftables NAT support for IPv4

You are now done with nftables; remember to check that no kernel settings needed for your specific setup are missing, and save your .config.

Then build the kernel and its modules

make && make modules

Install your kernel to /boot manually, so you can fall back to your old kernel if something goes wrong.

cp arch/x86_64/boot/bzImage /boot/vmlinuz-4.2

cp System.map /boot/System.map-4.2

cp .config /boot/config-4.2

Install kernel modules

make modules_install

Boot

Some setups may need an initial ramdisk to boot; this is the case if your root partition is on LVM or RAID, or if the root filesystem's driver was not built into the kernel.

The following example creates the compressed ramdisk file /boot/initrd-4.2.gz, which will wait 8 seconds before booting from the rootfs partition of the vgroup logical volume group, loading the Ext4 and XFS filesystem modules from kernel 4.2.0

mkinitrd -w 8 -c -u -f ext4 -m ext4:xfs -L -r /dev/vgroup/rootfs -k 4.2.0 -o /boot/initrd-4.2.gz

Add a new option to your bootloader pointing to your kernel and ramdisk (if you have one); on LILO you would add something like this to your /etc/lilo.conf

image     = /boot/vmlinuz-4.2
root     = /dev/vgroup/rootfs
label     = linux-4.2
initrd     = /boot/initrd-4.2.gz
read-only
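If you use GRUB 2 instead of LILO, an equivalent menu entry might look like the sketch below; the device path and entry title are assumptions you must adapt to your system (on most distros, grub-mkconfig regenerates this file for you):

```shell
menuentry 'linux-4.2' {
    # root device and LVM path are illustrative
    linux  /boot/vmlinuz-4.2 root=/dev/vgroup/rootfs ro
    initrd /boot/initrd-4.2.gz
}
```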

Once your system reboots, check the module again.

modinfo nf_tables

You should now see information about the module; otherwise, review the menuconfig steps above and try to mark all netfilter-related symbols as modules.

After that, build and install those modules

make modules && make modules_install

Install nft tool

Now it is time to install the nftables user-space utility, nft, the replacement for traditional iptables and friends. Before we can do that, we need to install the shared libraries required to build nft itself.

GMP - The GNU Multiple Precision Arithmetic Library

Download and extract the package

wget https://gmplib.org/download/gmp/gmp-6.0.0a.tar.xz && tar Jxvf gmp-*

Build and install

cd gmp* && ./configure && make && make install

libreadline - The GNU Readline Library

You will need this library if you plan to use nft in interactive mode, which is optional and not covered here.

Download, extract and enter source tree.

wget ftp://ftp.gnu.org/gnu/readline/readline-6.3.tar.gz && tar zxvf readline* && cd readline*

Configure it to use ncurses, then make and install.

./configure --with-curses && make && make install

libmnl - Minimalistic user-space library for Netlink developers

Download, extract and enter the source tree

wget http://www.netfilter.org/projects/libmnl/files/libmnl-1.0.3.tar.bz2 && tar jxvf libmnl-* && cd libmnl-*

Configure, make and install

./configure && make && make install

libnftnl

Download, extract and enter source tree

wget http://www.netfilter.org/projects/libnftnl/files/libnftnl-1.0.3.tar.bz2 && tar jxvf libnftnl* && cd libnftnl*

Configure, make and install.

./configure && make && make install

Build and install nft

Download, extract and enter source tree.

wget http://www.netfilter.org/projects/nftables/files/nftables-0.4.tar.bz2 && tar jxvf nftables* && cd nftables*

Then configure, make and install

./configure && make && make install

Note that you can pass the --without-cli flag to the configure script; it disables the interactive command-line interface and removes the need for the readline library.

Using nftables

The first thing you can do is load the basic template tables for IPv4 networking, which can be found in the nft source tree. Of course you can write them by hand, but remember that it is always a good idea to start simple.

Load IPv4 filter table definitions

nft -f files/nftables/ipv4-filter

Load NAT table

nft -f files/nftables/ipv4-nat

It is also a good idea to load the mangle table

nft -f files/nftables/ipv4-mangle

Now list your tables

nft list tables

Drop any new packet addressed to this machine

nft add rule filter input ct state new drop

Accept packets that are from or related to established connections

nft add rule filter input ct state related,established accept

Most Linux systems run OpenSSH, so it is a good idea to accept connections to TCP port 22 to keep access to your SSH service. Note that we use 'insert' here so the rule is placed before the drop rule added earlier.

nft insert rule filter input tcp dport 22 accept

Now list your tables and take a look at how things are going

nft list table filter

Performing Network Address Translation (NAT)

Create a rule to translate the source addresses of packets coming from the 192.168.1.0/24 network, counting them before they are sent.

nft add rule nat postrouting ip saddr 192.168.1.0/24 counter masquerade

Take a look at your rules; this time append the '-a' flag to get more details, including rule handles

nft list table nat -a

Enable forwarding

You will also need to enable IP forwarding in the kernel

sysctl -w net.ipv4.ip_forward=1

To enable forwarding at startup, put the following line in the /etc/sysctl.conf file, which may need to be created on some distros.

net.ipv4.ip_forward=1

You can also enable forwarding through the proc filesystem; run the following command to do so, and put it at the end of an rc script such as rc.local to enable forwarding at startup

echo 1 > /proc/sys/net/ipv4/ip_forward

Saving your tables

To save your settings, just redirect the output of the listing commands to a file

Save filter table

nft list table filter -a > /etc/firewall.tables

Now append the nat table; note that we use '>' twice ('>>') so the file is appended to rather than overwritten.

nft list table nat -a >> /etc/firewall.tables

Then append mangle table

nft list table mangle -a >> /etc/firewall.tables
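The three listing commands can also be wrapped in a small helper script. A sketch, assuming the filter, nat and mangle tables loaded earlier and the same output path:

```shell
# Hypothetical helper: dump each table, with handles, into one rules file.
save_tables() {
    out=$1
    : > "$out"    # truncate the target file first
    for t in filter nat mangle; do
        nft list table "$t" -a >> "$out"
    done
}

# Example: save_tables /etc/firewall.tables
```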

Now you just need to load this file when your system starts

nft -f /etc/firewall.tables
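How you run that command at boot depends on your init system; on distros with a classic rc.local, a line like the following (using the path saved above) is one option:

```shell
# /etc/rc.local excerpt (location and existence vary by distro)
command -v nft >/dev/null 2>&1 && nft -f /etc/firewall.tables
```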

Conclusion

Your Linux machine is now able to serve the internet; all you have to do now is point your devices at the Linux machine as their gateway to share your connection. Of course there are many other details and features in nftables, but this should be enough for you to understand the basics, protect your systems, share internet access and prepare to say goodbye to iptables and family.

The post How to Configure nftables to Serve Internet appeared first on LinOxide.


How to Setup ZPanel CP on Linux CentOS 6


Today we are going to show you one of the most popular web hosting control panel solutions: ZPanel, an open source application that works on Windows and Linux. It is written in PHP and uses various other open source software packages to provide a secure web hosting control panel. With ZPanel you can manage every aspect of your web server, including email accounts, MySQL databases, domains, FTP, DNS and advanced configuration such as cron jobs.

ZPanel is extremely easy to get up and running. It provides an installation script that does everything needed on your own web server, so all you need is a blank server with CentOS 6 installed, and ZPanel will do the rest.

System Preparation

ZPanel is a very lightweight web hosting control panel in terms of resource usage compared to other hosting control panels, but it is strongly recommended to have at least 512MB of RAM for better performance.

ZPanel does not yet support CentOS 6.6 and above, so we will be using CentOS 6.5 here. First of all, prepare a fresh CentOS 6.5 installation and log in to your server via SSH as the root user. Stop any other web or database services running on it, then remove their packages so that no other web or database services remain.

Once you are ready with a clean installation of CentOS 6.5, first apply all updates with the command below.

# yum update

Download ZPanel Installer

Let’s download the latest available ZPanel installer from its official download page.

zpanel download

Copy the link to the latest supported installation script for CentOS 6 and download it in your server’s shell using the command below.

# wget https://raw.github.com/zpanel/installers/master/install/CentOS-6_4/10_1_1.sh

zpanel install package

Starting ZPanel Installation

Before starting the installation through the ZPanel installer script, make sure that it has executable permissions; you can check and assign them as shown in the image below.

# chmod +x 10_1_1.sh

change permissions

Now execute the ZPanel installation script using the below command.

# ./10_1_1.sh

This will check the installed packages and detect your supported operating system version. Once everything meets its requirements, you will be greeted with the welcome screen of the official ZPanel installer, shown below.

##############################################################
# Welcome to the Official ZPanelX Installer for CentOS 6.4   #
#                                                            #
# Please make sure your VPS provider hasn't pre-installed    #
# any packages required by ZPanelX.                          #
#                                                            #
# If you are installing on a physical machine where the OS   #
# has been installed by yourself please make sure you only   #
# installed CentOS with no extra packages.                   #
#                                                            #
# If you selected additional options during the CentOS       #
# install please consider reinstalling without them.         #
#                                                            #
##############################################################

Then press the “Y” key to continue, and choose the continent, your country and the time zone, following the instructions provided during the installation.

The following information has been given:

Britain (UK)

Therefore TZ='Europe/London' will be used.
Local time is now: Sat Aug 22 01:13:45 BST 2015.
Universal Time is now: Sat Aug 22 00:13:45 UTC 2015.
Is the above information OK?
1) Yes
2) No
#? 1

You can make this change permanent for yourself by appending the line
TZ='Europe/London'; export TZ
to the file '.profile' in your home directory; then log out and log in again.

Here is that TZ value again, this time on standard output so that you
can use the /usr/bin/tzselect command in shell scripts:
Europe/London

After that you have to provide the FQDN (fully qualified domain name) that will be used to access the server. Provide the appropriate settings and then type “Y” to continue, as shown below.

Enter the FQDN you will use to access ZPanel on your server.
- It MUST be a sub-domain of you main domain, it MUST NOT be your main domain only. Example: panel.yourdomain.com
- Remember that the sub-domain ('panel' in the example) MUST be setup in your DNS nameserver.
FQDN for zpanel: zpanel-cp.linoxide.com
Enter the public (external) server IP: 19.19.23.12
ZPanel is now ready to install, do you wish to continue (y/n) y

The installation process will now take a while, as it installs the following packages

  • ZPanel
  • Apache
  • MySQL
  • PHP
  • Bind
  • Postfix

After successful completion it will restart the services of all installed packages and greet you with the generated passwords for MySQL and Postfix and the ZPanel username and password. Save all of this valuable information and type “Y” to restart your server and complete the installation.

zpanel installation

Once your server is back, open your web browser and access the ZPanel web login using your server's IP.

http://your_servers_ip/

Provide the login details generated above after completion of the installation.

zpanel login

Welcome to the ZPanel Control Panel

Once you have provided valid credentials you will be directed to the ZPanel dashboard, where you will be able to perform any web hosting task, whether you want to configure your emails, manage your databases or your domains.

zpanelcp dashboard

Conclusion

ZPanel is good looking, simple to use, and by far one of the easiest panels to set up. It will be more than enough for the vast majority of users who need a free web server control panel. We hope you enjoyed this tutorial; feel free to leave your feedback and suggestions in the comments.


How to Install and Configure ISPConfig CP on CentOS 7.x


When we talk about web hosting, or want to manage one or more websites through a user-friendly web interface, there are several web hosting control panels to choose from; some are proprietary and many are open source. ISPConfig is one of the most widely used open source web hosting control panels for Linux, designed to manage Apache, FTP, DNS, email and databases through its web-based interface. ISPConfig provides different levels of user access: administrator, reseller, client and email user.

Now we will set it up on CentOS 7; after following this tutorial you will have a user-friendly web hosting control panel where you can easily manage multiple domains without any cost.

Basic OS Setup

As we are going to set up ISPConfig on CentOS 7, before starting the installation we will configure the basics: network settings, firewall rules and the required dependencies.

Network Setup

Your Linux host should be configured with a proper FQDN and IP address and must have internet access. You can configure the local hostname by opening your system's hosts file with the command below.

# vim /etc/hosts

72.25.10.73 ispcp ispcp.linoxide.com

Configure Firewall

Enabling a system-level firewall has always been good practice for securing your servers. On CentOS 7 you can enable the firewall and open the required ports using the commands below.

To enable and start the firewall, run the commands below.

# systemctl enable firewalld
# systemctl start firewalld

Then open the ports that will be used in the ISPConfig setup using the commands below, and reload the firewall so the permanent rules take effect.

# firewall-cmd --zone=public --add-port 22/tcp --permanent
# firewall-cmd --zone=public --add-port 443/tcp --permanent
# firewall-cmd --zone=public --add-port 80/tcp --permanent
# firewall-cmd --zone=public --add-port 8080/tcp --permanent
# firewall-cmd --zone=public --add-port 25/tcp --permanent
# firewall-cmd --reload

Setup Dependencies

Before we move forward, let's update the system with the latest updates and security patches, and enable the EPEL repository on our CentOS system to get the packages required by ISPConfig.

# yum -y install yum-priorities

To update existing packages on the system run the below command.

# yum update

Once your system is up to date, we will install the Development tools packages that will be required for the complete setup of ISPConfig. To install these packages you can run the below command.

# yum -y groupinstall 'Development Tools'

1) Installing LAMP Stack

Now run the command below to install the LAMP stack packages: MariaDB, Apache, PHP, NTP and phpMyAdmin.

# yum install ntp httpd mod_ssl mariadb-server php php-mysql php-mbstring phpmyadmin

After the LAMP stack packages are installed, start and enable the MariaDB service, then set its root password with 'mysql_secure_installation'.

# systemctl start mariadb
# systemctl enable mariadb

# mysql_secure_installation

2) Installing Dovecot

You can install dovecot by issuing the following command.

# yum -y install dovecot dovecot-mysql dovecot-pigeonhole

After installation, create an empty dovecot-sql.conf file and create a symbolic link as shown below.

# touch /etc/dovecot/dovecot-sql.conf
# ln -s /etc/dovecot/dovecot-sql.conf /etc/dovecot-sql.conf

Now restart dovecot services and enable it at boot.

# systemctl start dovecot
# systemctl enable dovecot

3) Installing ClamAV, Amavisd-new and SpamAssassin

To install ClamAV, Amavisd-new and SpamAssassin, use the following command, which will install all of these packages in one go.

# yum -y install amavisd-new spamassassin clamav clamd clamav-update unzip bzip2 unrar perl-DBD-mysql

4) Installing Apache2 and PHP Modules

Now we will install the modules that allow ISPConfig 3 to use mod_php, mod_fcgid/PHP5, cgi/PHP5 and suPHP on a per-website basis.

So, to install these modules along with Apache2, you can run the command below in your SSH terminal.

# yum -y install php-ldap php-mysql php-odbc php-pear php php-devel php-gd php-imap php-xml php-xmlrpc php-pecl-apc php-mbstring php-mcrypt php-mssql php-snmp php-soap php-tidy curl curl-devel mod_fcgid php-cli httpd-devel php-fpm perl-libwww-perl ImageMagick libxml2 libxml2-devel python-devel

To configure your date and time format, open the default PHP configuration file and set the time zone.

# vim /etc/php.ini
date.timezone = Europe/London

After making changes in the configuration file, make sure to restart the Apache web service.

# systemctl restart httpd

5) Installing PureFTPd

PureFTPd is required for transferring files from one server to another; to install its package you can use the command below.

# yum -y install pure-ftpd

6) Installing BIND

BIND is the domain name server utility on Linux; to manage and configure DNS settings in ISPConfig, install its packages using the command shown below.

# yum -y install bind bind-utils

ISPConfig Installation Setup

Now get ready for the ISPConfig installation. To download its installation package, we will use the following wget command to fetch the package from the official ISPConfig download link.

# wget http://www.ispconfig.org/downloads/ISPConfig-3-stable.tar.gz

Download ISPConfig

Once the package is downloaded, run the below command to unpack the package.

# tar -zxvf ISPConfig-3-stable.tar.gz

Then change into the directory where the installation files were unpacked, as shown in the image below.

ISPConfig Package

Installing ISPConfig

Now we will run the installer through PHP with the following command in the terminal.

# php -q install.php

ispconfig installer

Initial configuration

Select language (en,de) [en]:

Installation mode (standard,expert) [standard]:

Full qualified hostname (FQDN) of the server, eg server1.domain.tld [ispcp]: ispcp.linoxide.com

Database Configurations

MySQL server hostname [localhost]:

MySQL root username [root]:

MySQL root password []: *******

MySQL database to create [dbispconfig]:

MySQL charset [utf8]:

The system will then generate a 4096-bit RSA private key and write it to the 'smtpd.key' file. After that we have to enter the information that will be incorporated into the certificate request.

Country Name (2 letter code) [XX]:UK
State or Province Name (full name) []:London
Locality Name (eg, city) [Default City]:Manchester
Organization Name (eg, company) [Default Company Ltd]:Linoxide
Organizational Unit Name (eg, section) []:Linux
Common Name (eg, your name or your server's hostname) []:ispcp
Email Address []:demo@linoxide.com

When you have added the above information, the system will be configured with all of its required packages as shown in the image below, and then you will be asked about a secure (SSL) connection to the ISPConfig web interface.

ispconfig ssl setup

Once you have entered the information for generating the RSA key to establish the SSL connection, you will be asked to configure some extra attributes; choose the defaults or change them as per your requirements. It will then write the RSA key, configure the database server and restart its services to complete the ISPConfig installation.

ispconfig setup

ISPConfig Login

Now we are ready to use the ISPConfig control panel. To access its web interface, open your web browser and go to the following URL, which consists of your FQDN or server IP address with the default configured port.

https://server_IP:8080/

You can log in with the default username and password: 'admin' / 'admin'.

ISPConfig Login

Using ISPConfig Control Panel

Upon successful authentication with the right login credentials, you will be directed to the ISPConfig dashboard, shown below.

ISPConfig dashboard

Using this admin control panel we will be able to manage system services, configure email, add DNS entries and set up new websites simply by choosing from the available modules.

In the following image we can see that choosing the System module shows the status of our server with all the services running on it.

using ispconfig

Conclusion

After completing this tutorial, you are now able to manage your services through a web browser, including the Apache web server, Postfix mail server, MySQL, the BIND nameserver, PureFTPd, SpamAssassin, ClamAV, Mailman and many more, without paying any license fee, as ISPConfig is free and open source and you can easily modify its source code if you wish. We hope you found this tutorial helpful; please leave a comment if you have any issue with this article, and feel free to post your suggestions.


How to Install and Configure OpenVPN in FreeBSD 10.2


A VPN, or Virtual Private Network, extends a private network across a public network, namely the internet. A VPN provides a secure network connection over the internet or over a private network owned by a service provider. It is one of the smartest solutions for improving your online privacy, using security protocols such as IPsec (Internet Protocol Security), SSL/TLS (Transport Layer Security) or PPTP (Point-to-Point Tunneling Protocol); you can even use SSH (Secure Shell) port forwarding to secure a remote connection, although we do not recommend it.

OpenVPN is an open-source project that provides a secure connection by implementing a virtual private network. It is flexible, reliable and secure. OpenVPN uses the OpenSSL library to provide strong encryption, can run over both UDP and TCP, and supports IPv4 and IPv6. It is designed to work with the TUN/TAP virtual network interfaces available on most platforms, and offers several authentication methods, including username/password-based and certificate-based authentication.

In this tutorial we will install OpenVPN on FreeBSD 10.2 with certificate-based authentication, so anyone who holds the certificate can use our VPN.

Prerequisites

  • FreeBSD 10.2
  • Root privileges

Step 1 - Update the System

Before you begin the installation, make sure your system is up to date; use "freebsd-update" to update it:

freebsd-update fetch
freebsd-update install

Step 2 - Install OpenVPN

You can install OpenVPN from the FreeBSD ports tree in the directory "/usr/ports/security/openvpn/", or with the binary package method using the "pkg" command. In this tutorial I use pkg; let's install it with the following command:

pkg install openvpn

The command will also install the "easy-rsa" and "lzo2" packages needed by OpenVPN.

Install OpenVPN in FreeBSD

Step 3 - Generate Server Certificate and Keys

We need the "easy-rsa" package, already installed on our FreeBSD system, to generate the server key and certificate.

So now create a new directory for OpenVPN and our keys:

mkdir -p /usr/local/etc/openvpn/

Next, copy the easy-rsa directory from "/usr/local/share/" to the openvpn directory:

cp -R /usr/local/share/easy-rsa /usr/local/etc/openvpn/easy-rsa/

Go to the openvpn easy-rsa directory, and then make all files there executable with the "chmod" command.

cd /usr/local/etc/openvpn/easy-rsa/
chmod +x *

Now generate the encryption certificates in the easy-rsa directory; start by sourcing "vars" and cleaning any previous keys:

. ./vars
NOTE: If you run ./clean-all, I will be doing a rm -rf on /usr/local/etc/openvpn/easy-rsa/keys

./clean-all

Next, we want to generate 4 keys and certificates:

  1. CA (Certificate Authority) key
  2. Server key and certificate
  3. Client key and certificate
  4. Diffie-Hellman parameters (necessary for the server end of an SSL/TLS connection)

Generate ca.key

In the easy-rsa directory, run the command below:

./build-ca

Enter your information: state, country, email etc. You can accept the defaults by pressing "Enter". The command will generate ca.key and ca.crt in the "keys/" directory.

Generate CA Key for Openvpn

Generate server key and certificate

Generate the server key with "build-key-server nameofserverkey"; we use "server" as our server name.

./build-key-server server

Enter your information: state, country, email etc. You can accept the defaults by pressing "Enter", and type "y" to confirm all the info.

Generate Server Key

Generate the client key and certificate

Generate the client key and certificate with the "build-key nameofclientkey" command in the easy-rsa directory; in this tutorial we will use "client" as our client name.

./build-key client

Enter your information: state, country, email etc. You can accept the defaults by pressing "Enter", and type "y" to confirm all the info.

Generate Client Key

Generate dh parameters

The default key size for the DH parameters on FreeBSD 10.2 is 2048 bits. That is strong, although you can make it even stronger by using 4096-bit keys, at the cost of a slower handshake.

./build-dh
Generating DH parameters, 2048 bit long safe prime, generator 2
This is going to take a long time
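If you do want 4096-bit parameters, easy-rsa 2.x takes the size from the KEY_SIZE variable in its "vars" file; a sketch of the change (re-source vars and re-run ./build-dh afterwards):

```shell
# In /usr/local/etc/openvpn/easy-rsa/vars (then: . ./vars && ./build-dh)
export KEY_SIZE=4096
```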

Now all certificates have been created under the keys directory, "/usr/local/etc/openvpn/easy-rsa/keys/". Finally, copy the keys directory up into the openvpn directory.

cp -R keys ../

cd ..
ll

total 40
drwxr-xr-x 4 root wheel 512 Sep 21 00:57 easy-rsa
drwx------ 2 root wheel 512 Sep 21 00:59 keys

Step 4 - Configure OpenVPN

In this step we will configure OpenVPN with all the keys and certificates we created before. We need to copy the sample configuration file from "/usr/local/share/examples/openvpn/sample-config-files/" to our openvpn directory "/usr/local/etc/openvpn/".

cp /usr/local/share/examples/openvpn/sample-config-files/server.conf /usr/local/etc/openvpn/server.conf
cd /usr/local/etc/openvpn/

Next, edit the "server.conf" file with nano; if you don't have it, install it with the command:

pkg install nano

Now edit the file :

nano -c server.conf

Note: -c shows line numbers in the nano editor.

At line 32, configure the port used by OpenVPN; I will use the default port:

port 1194

I use the UDP protocol; it is the default configuration, at line 36:

proto udp

Next, go to line 78 to configure the certificate authority (CA), server certificate, server key and DH parameters.

ca /usr/local/etc/openvpn/keys/ca.crt
cert /usr/local/etc/openvpn/keys/server.crt
key /usr/local/etc/openvpn/keys/server.key #our server key
dh /usr/local/etc/openvpn/keys/dh2048.pem

Now configure the private network used by OpenVPN and its clients; go to line 101. I will leave the default:

server 10.8.0.0 255.255.255.0

Finally, configure the log files at line 280; we will keep them in the "/var/log/openvpn/" directory.

status /var/log/openvpn/openvpn-status.log

and at line 289:

log /var/log/openvpn/openvpn.log

Save and exit. Now create the directory and the files to store the logs:

mkdir -p /var/log/openvpn/
touch /var/log/openvpn/{openvpn,openvpn-status}.log

Step 5 - Enable Port Forwarding and Add OpenVPN to the Startup

To enable port forwarding in FreeBSD you can use the sysctl command:

sysctl net.inet.ip.forwarding=1

Add OpenVPN to the boot sequence by editing the "rc.conf" file:

nano /etc/rc.conf

Add the lines below to the end of the file:

gateway_enable="YES"
openvpn_enable="YES"
openvpn_configfile="/usr/local/etc/openvpn/server.conf"
openvpn_if="tun"

Save and Exit.

Step 6 - Start OpenVPN

Start OpenVPN with the service command:

service openvpn start

Then check that OpenVPN is running by checking the port it uses:

sockstat -4 -l

You can see that port 1194 is open and used by OpenVPN.

Step 7 - Configure the Client

On the client side, download the certificate files:

  • ca.crt
  • client.crt
  • client.key

Copy those three files to the home directory and change their ownership to the user that you use to log in with SSH:

cd /usr/local/etc/openvpn/keys/
cp ca.crt client.crt client.key /home/myuser/
cd /home/myuser/
chown myuser:myuser ca.crt client.crt client.key

Then download those certificates to your client. I use Linux here, so I just need to fetch them with the scp command:

scp myuser@192.168.1.100:~/ca.crt myvpn/
scp myuser@192.168.1.100:~/client.crt myvpn/
scp myuser@192.168.1.100:~/client.key myvpn/

Now create the client configuration file:

nano client.ovpn

Add the configuration below:

client
dev tun
proto udp
remote 192.168.1.100 1194 #ServerIP and Port used by openvpn
resolv-retry infinite
nobind
user nobody
persist-key
persist-tun
mute-replay-warnings
ca ca.crt
cert client.crt
key client.key
ns-cert-type server
comp-lzo
verb 3

Save and Exit.

Now look at the files that belong to the client:

ll

total 20K
-rw-r--r--. 1 myuser myuser 1.8K Sep 21 03:09 ca.crt
-rw-r--r--. 1 myuser myuser 5.4K Sep 21 03:09 client.crt
-rw-------. 1 myuser myuser 1.7K Sep 21 03:09 client.key
-rw-rw-r--. 1 myuser myuser 213 Sep 20 00:13 client.ovpn

Step 8 - Testing OpenVPN

It is time to test OpenVPN. Connect to the OpenVPN server with the client configuration file we created:

cd myvpn/
sudo openvpn --config client.ovpn

And we are connected to the VPN, with the private IP 10.8.0.6.

Connected to OpenVPN 1

OpenVPN connected successfully.

Another test: ping the client's private IP from the FreeBSD server:

ping 10.8.0.6

And from the client, connect to the FreeBSD server via its private OpenVPN IP, 10.8.0.1:

ssh myuser@10.8.0.1

Connected to OpenVPN 2

Everything works; we are connected.

Conclusion

A VPN, or Virtual Private Network, is a secure, private network carried over a public network (the Internet). OpenVPN is an open source project that implements virtual private networking; it secures your traffic by encrypting it with the OpenSSL libraries. OpenVPN is easy to deploy and install on your own server, making it one of the best solutions if you want to protect your online privacy.

The post How to Install and Configure OpenVPN in FreeBSD 10.2 appeared first on LinOxide.

How to Install OwnCloud 8 with Nginx and SSL on FreeBSD 10.2


OwnCloud is a suite of client-server applications for creating file hosting services. It allows you to build your own cloud storage and to share your data, contacts, and calendars with other users and devices. OwnCloud is an open source project that provides an easy way to sync and share data hosted in your own data center. It has a beautiful, user-friendly front-end that makes it easy to browse and access data and share it with other users. In short, OwnCloud is a secure enterprise solution for online file sync and sharing.

In this tutorial, I will guide you step by step through installing OwnCloud 8 on FreeBSD 10.2, using Nginx (engine x) as the web server, PHP-FPM, and MariaDB as the database system.

Step 1 ) Installing Nginx php-fpm and MariaDB

In a previous tutorial, we discussed the installation of the FEMP stack (Nginx, MariaDB and PHP-FPM); in this tutorial we will cover it only briefly. We will install FEMP using the pkg command.

Install Nginx :

pkg install nginx

Install MariaDB :

pkg install mariadb100-server-10.0.21 mariadb100-client-10.0.21

Install PHP-FPM and all the packages needed by OwnCloud:

pkg install php56-extensions php56-mysql php56-pdo_mysql php56-zlib php56-openssl php56-bcmath php56-gmp php56-gd php56-curl php56-ldap php56-exif php56-fileinfo php56-mbstring php56-bz2 php56-zip php56-mcrypt pecl-APCu pecl-intl

Step 2 ) Configure Nginx php-fpm and MariaDB

Configure Nginx.

Leave Nginx with its default configuration. In this step you just need to add Nginx to startup with the sysrc command, then start it:

sysrc nginx_enable=yes
service nginx start

Then try accessing it with your browser.

Nginx homepage

Configure MariaDB

Copy the MariaDB sample configuration and add the service to startup.

cp /usr/local/share/mysql/my-medium.cnf /usr/local/etc/my.cnf
sysrc mysql_enable=yes

Start mariadb :

service mysql-server start

Next, configure a password for MariaDB:

mysql_secure_installation

Enter current password for root (enter for none):
#Just press Enter here
Change the root password? [Y/n] Y
#Type your password for mariadb here
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y

Then try logging in to the MariaDB/MySQL server with the command:

mysql -u root -p
YOUR PASSWORD

MariaDB Configured

Configure PHP-FPM

Change the default listen directive to a Unix socket, set the permissions for it, and configure PHP-FPM to run under the user "www".

nano /usr/local/etc/php-fpm.conf

Change the lines as shown below:

listen = /var/run/php-fpm.sock
...
...
listen.owner = www
listen.group = www
listen.mode = 0660

Now copy and edit the php.ini file:

cd /usr/local/etc/
cp php.ini-production php.ini
nano php.ini

Change the cgi.fix_pathinfo value to 0:

cgi.fix_pathinfo=0
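If you prefer to script this change rather than edit the file by hand, a sed one-liner does it. The sketch below operates on a throwaway copy in /tmp so it can be run safely; on the server you would point it at /usr/local/etc/php.ini, and note that FreeBSD's sed wants `sed -i ''` where GNU sed takes plain `sed -i`:

```shell
# demo: force cgi.fix_pathinfo=0, whether the line is commented out or not
# (scratch file; substitute /usr/local/etc/php.ini on the real server)
demo=/tmp/php.ini.demo
printf ';cgi.fix_pathinfo=1\n' > "$demo"

sed -i 's/^;*[[:space:]]*cgi\.fix_pathinfo[[:space:]]*=.*/cgi.fix_pathinfo=0/' "$demo"
grep '^cgi.fix_pathinfo' "$demo"
```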

Finally, add PHP-FPM to boot time and start it:

sysrc php_fpm_enable=yes
service php-fpm start

Step 3 ) Generate SSL Certificate for OwnCloud

Create a new directory "cert" in /usr/local/etc/nginx/ and generate the SSL certificate:

mkdir -p /usr/local/etc/nginx/cert/
cd /usr/local/etc/nginx/cert/
openssl req -new -x509 -days 365 -nodes -out /usr/local/etc/nginx/cert/owncloud.crt -keyout /usr/local/etc/nginx/cert/owncloud.key

Next, change the permissions of the certificate files to 600:

chmod 600 *
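The openssl command above prompts interactively for the certificate fields (country, common name, and so on). A non-interactive variant supplies the subject with -subj; the sketch below writes to a scratch directory and uses "owncloud.local" as a placeholder common name:

```shell
# demo: generate a self-signed certificate without prompts
# ("/CN=owncloud.local" is a placeholder subject)
dir=/tmp/owncloud-cert-demo
mkdir -p "$dir" && cd "$dir"

openssl req -new -x509 -days 365 -nodes \
    -subj "/CN=owncloud.local" \
    -out owncloud.crt -keyout owncloud.key

chmod 600 owncloud.crt owncloud.key
openssl x509 -in owncloud.crt -noout -subject   # confirm the subject we set
```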

Step 4 ) Create Database for OwnCloud

To create the database for OwnCloud, log in to the MySQL/MariaDB server using the password that was set earlier.

mysql -u root -p
YOUR PASSWORD

Create a new database called "my_ownclouddb":

create database my_ownclouddb;

Then create a new user "myownclouduser" for the "my_ownclouddb" database:

create user myownclouduser@localhost identified by 'myownclouduser';

Next, grant the newly created user privileges on the "my_ownclouddb" database:

grant all privileges on my_ownclouddb.* to myownclouduser@localhost identified by 'myownclouduser';
flush privileges;

Configure Database for OwnCloud

Step 5 ) Install and Configure OwnCloud

Go to the /tmp directory and download OwnCloud from the official site with the fetch command. I am using OwnCloud 8.1.3 here, the latest stable version at the time of writing.

cd /tmp/
fetch https://download.owncloud.org/community/owncloud-8.1.3.tar.bz2

Extract OwnCloud and move the owncloud directory to "/usr/local/www/".

tar -xjvf owncloud-8.1.3.tar.bz2
mv owncloud/ /usr/local/www/

Now create a new directory "data" inside the owncloud directory, and change the ownership of the files and directories to the "www" user that runs Nginx.

cd /usr/local/www/
mkdir -p /usr/local/www/owncloud/data
chown -R www:www owncloud/

Next, configure the virtual host for OwnCloud.

Move to the Nginx configuration directory and rename the default configuration file nginx.conf to nginx.conf.original.

cd /usr/local/etc/nginx/
mv nginx.conf nginx.conf.original

Then create a new configuration file for OwnCloud:

nano nginx.conf

Paste the following code :

worker_processes 2;

events {
worker_connections  1024;
}

http {
include      mime.types;
default_type  application/octet-stream;
sendfile        off;
keepalive_timeout  65;
gzip off;

server {
listen 80;
server_name 192.168.1.114;

#Force to the https
return 301 https://$server_name$request_uri;
}

server {

listen 443 ssl;
server_name 192.168.1.114; #YourIP or domain

#SSL Certificate you created
ssl_certificate /usr/local/etc/nginx/cert/owncloud.crt;
ssl_certificate_key /usr/local/etc/nginx/cert/owncloud.key;

# Add headers to serve security related headers
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Robots-Tag none;

root /usr/local/www/owncloud;
location = /robots.txt { allow all; access_log off; log_not_found off; }
location = /favicon.ico { access_log off; log_not_found off; }
location ^~ / {
index index.php;
try_files $uri $uri/ /index.php$is_args$args;
fastcgi_intercept_errors on;
error_page 403 /core/templates/403.php;
error_page 404 /core/templates/404.php;
client_max_body_size 512M;
fastcgi_buffers 64 4K;
location ~ ^/(?:\.|data|config|db_structure\.xml|README) {
deny all;
}
location ~ \.php(?:$|/) {
fastcgi_split_path_info ^(.+\.php)(/.*)$;
fastcgi_pass unix:/var/run/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param HTTPS on;
include fastcgi_params;
fastcgi_param modHeadersAvailable true;
fastcgi_param MOD_X_ACCEL_REDIRECT_ENABLED on;
}
location ~* \.(?:jpg|gif|ico|png|css|js|svg)$ {
expires 30d;
add_header Cache-Control public;
access_log off;
}
location ^~ /data {
internal;
alias /mnt/files;
}
}
}
}

Save and Exit.

Next, test the Nginx configuration with the command "nginx -t". If there is no error, restart Nginx and PHP-FPM:

nginx -t
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful

service php-fpm restart
service nginx restart

Visit the server IP or the domain name with your browser :

http://owncloud.local - this will redirect to HTTPS, so please accept the self-signed SSL certificate.

Create an admin user for OwnCloud by filling in your username and password, then fill in the database configuration with the database that was set up in the previous step.

Configure Owncloud

And click "Finish Setup".

Step 6 ) Testing

Visit http://192.168.1.114/, then log in with the username and password configured earlier.

Owncloud installed

We have now successfully installed and configured OwnCloud 8.1.3 on FreeBSD 10.2 with SSL and the Nginx web server.

Conclusion

OwnCloud is an open source project that makes it easy for users to store and exchange data in the cloud. We can install ownCloud on our own servers, so we can organize the data stored there easily and safely. Because the data stays on hardware we control, this is a good solution for both convenience and security. OwnCloud is easily installed and configured on any server.

The post How to Install OwnCloud 8 with Nginx and SSL on FreeBSD 10.2 appeared first on LinOxide.

How Setup VMs using Gnome-Boxes


Gnome Boxes is a simple virtualisation application whose purpose is to provide an easy graphical user interface (GUI) for managing virtual machines on Linux. Using Boxes, we can access and use both local and remote virtual systems. It is an alternative to tools like VMware, VirtualBox and Virt-manager; however, it is targeted at basic users rather than system administrators who need advanced features. As Gnome Boxes is part of the GNOME environment, it is available on almost all Linux distributions. Underneath, Boxes makes use of QEMU, which in turn needs hardware virtualisation extensions (Intel VT-x or AMD-V). You can enable hardware virtualisation extensions on your system in the BIOS settings; if your system does not support them, you will not be able to use Boxes. In this article, let us learn how to set up virtual machines using Gnome Boxes.

Installing Gnome-Boxes

If you are a Ubuntu user, execute

sudo apt-get install gnome-boxes

poornima@poornima-Lenovo:~$ sudo apt-get install gnome-boxes
[sudo] password for poornima:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
augeas-lenses cpu-checker dmsetup ebtables gawk ipxe-qemu libaugeas0

.....

16 upgraded, 47 newly installed, 0 to remove and 1247 not upgraded.
Need to get 15.0 MB/15.4 MB of archives.
After this operation, 60.8 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

Red Hat and Fedora users can go with dnf:

sudo dnf install gnome-boxes

Once installed, you can start Boxes by executing the command:

gnome-boxes

Setting up virtual machine

When you start gnome-boxes for the first time, there are no boxes yet and you can proceed to create one by clicking on the 'New' button.

Freshly started gnome-box

Creating a box

Setting up a virtual machine using Gnome-boxes is pretty simple. We need to first have the required .iso file downloaded or have a link to the URL where it is available.

Selecting the source file

In the screenshot above, I have chosen the URL option, hence it asks for the URL to be entered. Alternatively, you can browse to the location of the .iso file and proceed from there. Gnome Boxes will now make preparations to create a new box by downloading the media (Fedora 22 Live Workstation in this case).

It will assign a default of 1 GB of memory and 21.5 GB of hard disk space to the virtual machine being created. This can be customised if required.

VM configuration review

 

Once the Fedora screen shows up, select 'Install to Hard Drive' option and you will be taken through the usual installation screens of Fedora to select the language, date & time, location etc. Once selected, a summary of the installation to be done is displayed and you can proceed by pressing the 'Begin Installation' button.

Summary of installation to be done

It takes a while for the installation to complete, and when done, voilà! You can reboot the VM and start using it.

Screen showing that installation is complete

Below is the screen shot of Fedora 22 VM booted after installation and ready to be used.

Fedora is now up and running

Now, if you want to go back to the gnome-boxes main screen, click on the '<' button on the left side of the top bar.  It takes you to the screen which lists all the VMs that are installed.  You can launch any VM from here.

Gnome-boxes main screen showing the list of VMs

In order to edit the properties of a particular VM, select the required VM and click on the 'Properties' button that gets displayed.

VM properties

Do not expect to control too many VM features from here, as Boxes is meant for basic users. The "System" tab allows you to control memory and disk space, but you cannot control resources like CPU. The "Devices" tab helps in accessing the devices connected to the host OS. You can create snapshots of the VMs using the "Snapshots" tab.

Conclusion

The latest version of Gnome Boxes available is 3.16.2, and the software improves with every update. Boxes is not a replacement for the more mature VMware or VirtualBox; it is worth a try mainly if you are looking for a safe and easy way to try out a new operating system.

The post How Setup VMs using Gnome-Boxes appeared first on LinOxide.

How to Setup DockerUI - a Web Interface for Docker


Docker is gaining popularity day by day. The idea of running a complete operating system environment inside a container rather than inside a virtual machine is an awesome technology. Docker has made the lives of millions of system administrators and developers much easier, letting them get their work done in no time. It is an open source technology that provides an open platform to pack, ship, share, and run any application as a lightweight container, regardless of which operating system the host runs. It has no boundaries of language support, frameworks, or packaging systems and can run anywhere, anytime, from small home computers to high-end servers. Running and managing docker containers from the command line can be a bit difficult and time consuming, so there is a web-based application named DockerUI that makes managing and running containers pretty simple. DockerUI is highly beneficial to people who are not familiar with the Linux command line but want to run containerized applications. It is an open source web application best known for its beautiful design and simple interface for running and managing docker containers.

Here are some easy steps on how we can setup Docker Engine with DockerUI in our linux machine.

1. Installing Docker Engine

First of all, we'll gonna install docker engine in our linux machine. Thanks to its developers, docker is very easy to install in any major linux distribution. To install docker engine, we'll need to run the following command with respect to which distribution we are running.

On Ubuntu/Fedora/CentOS/RHEL/Debian

Docker maintainers have written an awesome script that can be used to install docker engine in Ubuntu 15.04/14.10/14.04, CentOS 6.x/7, Fedora 22, RHEL 7 and Debian 8.x distributions of linux. This script recognizes the distribution of linux installed in our machine, then adds the required repository to the filesystem, updates the local repository index and finally installs docker engine and required dependencies from it. To install docker engine using that script, we'll need to run the following command under root or sudo mode.

# curl -sSL https://get.docker.com/ | sh

On OpenSuse/SUSE Linux Enterprise

To install docker engine on a machine running OpenSuse 13.1/13.2 or SUSE Linux Enterprise Server 12, we simply need to execute the zypper command, as the latest docker engine is available in the official repository. To do so, we'll run the following command under root/sudo mode.

# zypper in docker

On ArchLinux

Docker is available in the official repository of Archlinux as well as in the AUR packages maintained by the community. So, we have two options to install docker in archlinux. To install docker using the official arch repository, we'll need to run the following pacman command.

# pacman -S docker

But if we want to install docker from the Archlinux User Repository ie AUR, then we'll need to execute the following command.

# yaourt -S docker-git

2. Starting Docker Daemon

After docker is installed, we'll start the docker daemon so that we can run and manage docker containers. Run the following command to make sure the docker daemon is started.

On SysVinit

# service docker start

On Systemd

# systemctl start docker

3. Installing DockerUI

Installing DockerUI is even easier than installing docker engine. We just need to pull the dockerui image from the Docker Registry Hub and run it inside a container. To do so, we simply run the following command.

# docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui

Starting DockerUI Container

In the above command, since the default port of the dockerui web application server is 9000, we map it to the host with the -p flag. With the -v flag, we mount the docker socket into the container. The --privileged flag is required for hosts using SELinux.

After executing the above command, check whether the dockerui container is running with the following command.

# docker ps

Running Docker Containers

4. Pulling an Image

Currently, we cannot pull an image directly from DockerUI, so we need to pull a docker image from the Linux console/terminal. To do so, run the following command.

# docker pull ubuntu

Docker Image Pull

The above command will pull an image tagged as ubuntu from the official Docker Hub. Similarly, we can pull more images that we require and are available in the hub.

5. Managing with DockerUI

After we have started the dockerui container, we can now have fun with it: starting, pausing, stopping, and removing containers, and performing the many other activities DockerUI offers for docker containers and images. First, open the web application in your browser by pointing it to http://ip-address:9000 or http://mydomain.com:9000, according to your system's configuration. By default, no login authentication is required, but we can configure our web server to add authentication. To start a container, we first need an image of the application we want to run.

Create a Container

To create a container, go to the section named Images and click on the id of the image you want to create a container from. Then click the Create button, and you will be asked to enter the required properties for your container. Once everything is set, click Create again to finally create the container.

Creating Docker Container

Stop a Container

To stop a container, move to the Containers page and select the container you want to stop. Then click the Stop option, found under the Actions drop-down menu.

Managing Container

Pause and Resume

To pause a container, simply select it by placing a check mark on it, then click the Pause option under Actions. This will pause the running container; you can then resume it by selecting the Unpause option from the Actions drop-down menu.

Kill and Remove

As with the tasks above, it is pretty easy to kill and remove a container or an image. Just check/select the required container or image and then click the Kill or Remove button, according to your need.

Conclusion

DockerUI is a beautiful use of the Docker Remote API to provide an awesome web interface for managing docker containers. The developers have designed and built this application in pure HTML and JavaScript. It is currently incomplete and under heavy development, so we don't recommend it for production use at the moment. It makes it easy for users to manage their containers and images with simple clicks, without needing to execute lines of commands for small jobs. If we want to contribute to DockerUI, we can visit its GitHub repository. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our content. Thank you!

The post How to Setup DockerUI - a Web Interface for Docker appeared first on LinOxide.

How to Setup Red Hat Ceph Storage on CentOS 7.0


Ceph is an open source software platform that stores data on a single distributed computer cluster. When you are planning to build a cloud, you have to decide, on top of the other requirements, how to implement your storage. Open source Ceph is one of Red Hat's mature technologies; it is based on an object storage system called RADOS, with a set of gateway APIs that present the data in block, file, and object modes. As a result of its open source nature, this portable storage platform may be installed and used in public or private clouds. The topology of a Ceph cluster is designed around replication and information distribution, which are intrinsic to it and provide data integrity. It is designed to be fault-tolerant and can run on commodity hardware, but it can also be run on a number of more advanced systems with the right setup.

Ceph can be installed on any Linux distribution, but it requires a recent kernel and other up-to-date libraries to run properly. In this tutorial we will be using CentOS 7.0 with a minimal package installation.

System Resources

CEPH-STORAGE
OS: CentOS Linux 7 (Core)
RAM:1 GB
CPU:1 CPU
DISK: 20 GB
Network: 45.79.136.163
FQDN: ceph-storage.linoxide.com

CEPH-NODE
OS: CentOS Linux 7 (Core)
RAM:1 GB
CPU:1 CPU
DISK: 20 GB
Network: 45.79.171.138
FQDN: ceph-node.linoxide.com

Pre-Installation Setup

There are a few steps that we need to perform on each of our nodes before the Ceph storage setup. The first thing is to make sure that each node's networking is configured with an FQDN that is reachable from the other nodes.

Configure Hosts

To setup the hosts entry on each node let's open the default hosts configuration file as shown below.

# vi /etc/hosts

45.79.136.163 ceph-storage ceph-storage.linoxide.com
45.79.171.138 ceph-node ceph-node.linoxide.com

Install VMware Tools

If you are working in a VMware virtual environment, it is recommended to install the open VM tools. You can do so with the command below.

#yum install -y open-vm-tools

Firewall Setup

If you are working in a restricted environment where the local firewall is enabled, make sure that the following ports are allowed on your Ceph storage admin node and client nodes.

You must open ports 80, 2003, and 4505-4506 on your admin Calamari node, and allow inbound port 80 to the Ceph admin or Calamari node so that clients in your network can access the Calamari web user interface.

You can start and enable the firewall on CentOS 7 with the commands given below.

#systemctl start firewalld
#systemctl enable firewalld

To allow the mentioned ports on the admin Calamari node, run the following commands.

#firewall-cmd --zone=public --add-port=80/tcp --permanent
#firewall-cmd --zone=public --add-port=2003/tcp --permanent
#firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
#firewall-cmd --reload

On the CEPH Monitor nodes you have to allow the following ports in the firewall.

#firewall-cmd --zone=public --add-port=6789/tcp --permanent

Then allow the following list of default ports for talking to clients and monitors and for sending data to other OSDs.

#firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

It is quite fair to disable the firewall and SELinux settings if you are working in a non-production environment, so we are going to disable both in our test environment.

#systemctl stop firewalld
#systemctl disable firewalld

System Update

Now update your system and then give it a reboot to implement the required changes.

#yum update
#shutdown -r 0

Setup CEPH User

Now we will create a separate sudo user that will be used to install the ceph-deploy utility on each node, and give that user passwordless access on each node, because it needs to install software and configuration files without being prompted for passwords on the Ceph nodes.

To create new user with its separate home directory run the below command on the ceph-storage host.

[root@ceph-storage ~]# useradd -d /home/ceph -m ceph
[root@ceph-storage ~]# passwd ceph

The user created on each node must have sudo rights. You can assign sudo rights to the user by running the following commands.

[root@ceph-storage ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL

[root@ceph-storage ~]# sudo chmod 0440 /etc/sudoers.d/ceph
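The 0440 mode matters: sudo refuses sudoers drop-ins that are writable or world-readable. Here is a quick sanity check, demonstrated on a scratch copy in /tmp (the real file is /etc/sudoers.d/ceph and needs root):

```shell
# demo: write the sudoers rule to a scratch file and verify its mode
# (the real path is /etc/sudoers.d/ceph)
f=/tmp/sudoers-ceph-demo
echo 'ceph ALL = (root) NOPASSWD:ALL' > "$f"
chmod 0440 "$f"
stat -c '%a %n' "$f"
```

If visudo is available, `visudo -cf /etc/sudoers.d/ceph` validates the syntax before sudo ever parses the file.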

Setup SSH-Key

Now we will generate SSH keys on the admin ceph node and then copy that key to each Ceph cluster nodes.

Let's run the following commands on the ceph-node to copy its SSH key to the ceph-storage node.

[root@ceph-node ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5b:*:*:*:*:*:*:*:*:*:c9 root@ceph-node
The key's randomart image is:
+--[ RSA 2048]----+

[root@ceph-node ~]# ssh-copy-id ceph@ceph-storage

SSH key

Configure PID Count

To configure the PID count value, first check the default kernel value with the following commands. By default the maximum number of threads is a fairly small 32768, so we will raise this value by editing the system configuration file as shown in the image.

Change PID Value
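Since the screenshot is not reproduced here, this is roughly what that step looks like on a typical system; the target value 4194303 is an illustrative choice, not taken from the original article:

```shell
# read the current maximum PID/thread count (commonly 32768 by default)
cat /proc/sys/kernel/pid_max

# raising it persistently requires root, along the lines of:
#   echo 'kernel.pid_max = 4194303' >> /etc/sysctl.conf
#   sysctl -p
```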

Setup Your Administration Node Server

With the networking set up and verified, we will now install ceph-deploy as the ceph user. First, check the hosts entries by opening the file.

#vim /etc/hosts
ceph-storage 45.79.136.163
ceph-node 45.79.171.138

Now to add its repository run the below command.

#rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm

Adding EPEL

Or create a new file and add the Ceph repository parameters yourself, not forgetting to substitute your current release and distribution.

[root@ceph-storage ~]# vi /etc/yum.repos.d/ceph.repo

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

After this update your system and install the ceph deploy package.

Installing CEPH-Deploy Package

To update the system with the latest Ceph repository and other packages, we will run the system update together with the ceph-deploy installation command.

#yum update -y && yum install ceph-deploy -y


Setup the cluster

Create a new directory and move into it on the admin ceph-node to collect all output files and logs by using the following commands.

#mkdir ~/ceph-cluster
#cd ~/ceph-cluster

#ceph-deploy new storage

setup ceph cluster

Upon successful execution of the above command you can see it creating its configuration files.
Now, to adjust the default Ceph configuration file, open it with any editor and place the following two lines under its global section, reflecting your public network.

#vim ceph.conf
osd pool default size = 1
public network = 45.79.0.0/16

Installing CEPH

We are now going to install Ceph on each node associated with our Ceph cluster. To do so, we use the following command to install Ceph on both of our nodes, ceph-storage and ceph-node, as shown below.

#ceph-deploy install ceph-node ceph-storage

installing ceph

This will take some time while it processes all the required repositories and installs the needed packages.

Once the ceph installation process is complete on both nodes we will proceed to create monitor and gather keys by running the following command on the same node.

#ceph-deploy mon create-initial

CEPH Initial Monitor

Setup OSDs and OSD Daemons

Now we will set up the disk storage. First, list all of your usable disks with the command below.

#ceph-deploy disk list ceph-storage

The result is a list of the disks on your storage nodes that you will use for creating OSDs. Next, zap the disks (this destroys any data on them), passing your disk names as shown below.

#ceph-deploy disk zap storage:sda
#ceph-deploy disk zap storage:sdb

Now, to finalize the OSD setup, run the commands below to set up the journal disk along with the data disk.

#ceph-deploy osd prepare storage:sdb:/dev/sda
#ceph-deploy osd activate storage:/dev/sdb1:/dev/sda1

You will have to repeat these commands on all the nodes, and note that they wipe everything present on the disks. Afterwards, to have a functioning cluster, we need to copy the keys and configuration files from the admin ceph-node to all the associated nodes with the following command.

#ceph-deploy admin ceph-node ceph-storage

Testing CEPH

We have almost completed the Ceph cluster setup. Let's check the status of the running cluster by issuing the commands below on the admin ceph-node.

#ceph status
#ceph health
HEALTH_OK

So, if you did not get any error message from ceph status, you have successfully set up your Ceph storage cluster on CentOS 7.

Conclusion

In this detailed article we set up a Ceph storage cluster using two virtual machines running CentOS 7; it can serve as backup or local storage for other virtual machines by creating pools on it. We hope you found this article helpful. Do share your experiences when you try this yourself.

The post How to Setup Red Hat Ceph Storage on CentOS 7.0 appeared first on LinOxide.


Getting Started to Calico Virtual Private Networking on Docker


Calico is free and open source software for virtual networking in data centers. It takes a pure Layer 3 approach to highly scalable cloud virtual networking in the data center. It integrates seamlessly with cloud orchestration systems such as OpenStack and Docker clusters to enable secure IP communication between virtual machines and containers. It implements a highly efficient vRouter in each node that takes advantage of the existing Linux kernel forwarding engine. Calico can peer directly with the data center's physical fabric, whether L2 or L3, without NAT, tunnels, on/off ramps, or overlays. Calico makes full use of Docker to run its components in containers on the nodes, which makes it multi-platform and very easy to ship, pack and deploy. Calico has the following salient features out of the box.

  • It can scale tens of thousands of servers and millions of workloads.
  • Calico is easy to deploy, operate and diagnose.
  • It is open source software licensed under Apache License version 2 and uses open standards.
  • It supports container, virtual machines and bare metal workloads.
  • It supports both IPv4 and IPv6 internet protocols.
  • It is designed internally to support rich, flexible and secure network policy.

In this tutorial, we'll set up virtual private networking between two nodes running Calico with Docker. Here are some easy steps on how we can do that.

1. Installing etcd

To get started with Calico virtual private networking, we'll need a Linux machine running etcd. CoreOS comes with etcd preinstalled and preconfigured, but to configure Calico on other Linux distributions we'll need to set it up ourselves. As we are running Ubuntu 14.04 LTS, we'll first install and configure etcd on our machine. To install etcd, we'll add the official PPA repository of Calico by running the following command on the machine that will run the etcd server. Here, we'll be installing etcd on our 1st node.

# apt-add-repository ppa:project-calico/icehouse

The primary source of Ubuntu packages for Project Calico based on OpenStack Icehouse, an open source solution for virtual networking in cloud data centers. Find out more at http://www.projectcalico.org/
More info: https://launchpad.net/~project-calico/+archive/ubuntu/icehouse
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmpi9zcmls1/secring.gpg' created
gpg: keyring `/tmp/tmpi9zcmls1/pubring.gpg' created
gpg: requesting key 3D40A6A7 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpi9zcmls1/trustdb.gpg: trustdb created
gpg: key 3D40A6A7: public key "Launchpad PPA for Project Calico" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK

Then, we'll need to edit /etc/apt/preferences and make changes to prefer Calico-provided packages for Nova and Neutron.

# nano /etc/apt/preferences

We'll need to add the following lines into it.

Package: *
Pin: release o=LP-PPA-project-calico-*
Pin-Priority: 100

Calico PPA Config

Next, we'll also add the official BIRD PPA for Ubuntu 14.04 LTS so that bug fixes are installed before they become available in the Ubuntu repo.

# add-apt-repository ppa:cz.nic-labs/bird

The BIRD Internet Routing Daemon PPA (by upstream & .deb maintainer)
More info: https://launchpad.net/~cz.nic-labs/+archive/ubuntu/bird
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmphxqr5hjf/secring.gpg' created
gpg: keyring `/tmp/tmphxqr5hjf/pubring.gpg' created
gpg: requesting key F9C59A45 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmphxqr5hjf/trustdb.gpg: trustdb created
gpg: key F9C59A45: public key "Launchpad Datové schránky" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK

Now that the PPAs have been added, we'll update the local repository index and then install etcd on our machine.

# apt-get update

To install etcd on our Ubuntu machine, we'll run the following apt command.

# apt-get install etcd python-etcd

2. Starting Etcd

After the installation is complete, we'll configure etcd. Here, we'll edit /etc/init/etcd.conf with a text editor, append the line exec /usr/bin/etcd and make it look like the configuration below.

# nano /etc/init/etcd.conf
exec /usr/bin/etcd --name="node1" \
--advertise-client-urls="http://10.130.65.71:2379,http://10.130.65.71:4001" \
--listen-client-urls="http://0.0.0.0:2379,http://0.0.0.0:4001" \
--listen-peer-urls "http://0.0.0.0:2380" \
--initial-advertise-peer-urls "http://10.130.65.71:2380" \
--initial-cluster-token $(uuidgen) \
--initial-cluster "node1=http://10.130.65.71:2380" \
--initial-cluster-state "new"
Configuring ETCD

Note: In the above configuration, replace 10.130.65.71 and node1 with the private IP address and hostname of your etcd server box. When done editing, save and exit the file.
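Once etcd is up, a quick way to confirm it is answering on the advertised client URL is to query the /version endpoint of its HTTP API. The sketch below builds the URL from this tutorial's example IP (replace it with your own); the actual curl call is commented out so it only runs where the server is reachable.

```shell
# Build the client URL from the etcd server's private IP (example value
# from this tutorial) and query the /version endpoint of etcd's HTTP API.
ETCD_IP=10.130.65.71
ETCD_URL="http://$ETCD_IP:2379/version"
echo "checking $ETCD_URL"
# curl -s "$ETCD_URL"   # uncomment on a host that can reach the etcd server
```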

We can get the private ip address of our etcd server by running the following command.

# ifconfig

ifconfig

 

As our etcd configuration is done, we'll now start the etcd service on our Ubuntu node. To start the etcd daemon, run the following command.

# service etcd start

Once done, we'll check whether etcd is really running. To do so, run the following command.

# service etcd status

3. Installing Docker

Next, we'll install Docker on both of our Ubuntu nodes. To install the latest release of Docker, we simply run the following command.

# curl -sSL https://get.docker.com/ | sh

Docker Engine Installation

After the installation is completed, we'll restart the docker daemon to make sure it's running before we move on to Calico.

# service docker restart

docker stop/waiting
docker start/running, process 3056

3. Installing Calico

We'll now install Calico on our Linux machines so we can run the Calico containers. Calico must be installed on every node we want to connect to the Calico network. To install Calico, run the following commands with root or sudo permission.

On 1st Node

# wget https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl

--2015-09-28 12:08:59-- https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
Resolving github.com (github.com)... 192.30.252.129
Connecting to github.com (github.com)|192.30.252.129|:443... connected.
...
Resolving github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)... 54.231.9.9
Connecting to github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)|54.231.9.9|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6166661 (5.9M) [application/octet-stream]
Saving to: 'calicoctl'
100%[=========================================>] 6,166,661 1.47MB/s in 6.7s
2015-09-28 12:09:08 (898 KB/s) - 'calicoctl' saved [6166661/6166661]

# chmod +x calicoctl

After making it executable, we'll make the calicoctl binary available as a command from any directory. To do so, run the following command.

# mv calicoctl /usr/bin/

On 2nd Node

# wget https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl

--2015-09-28 12:09:03-- https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
Resolving github.com (github.com)... 192.30.252.131
Connecting to github.com (github.com)|192.30.252.131|:443... connected.
...
Resolving github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)... 54.231.8.113
Connecting to github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)|54.231.8.113|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6166661 (5.9M) [application/octet-stream]
Saving to: 'calicoctl'
100%[=========================================>] 6,166,661 1.47MB/s in 5.9s
2015-09-28 12:09:11 (1022 KB/s) - 'calicoctl' saved [6166661/6166661]

# chmod +x calicoctl

After making it executable, we'll make the calicoctl binary available as a command from any directory. To do so, run the following command.

# mv calicoctl /usr/bin/

Likewise, we'll need to execute the above commands on every other node.
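Since the same three commands are repeated on every node, they can be scripted over SSH. This is only a sketch: node1 and node2 are placeholder hostnames for your own machines, and the actual ssh line is commented out.

```shell
# Repeat the calicoctl install on a list of nodes over SSH (sketch only;
# node1/node2 are placeholders for your own machines).
CALICO_URL=https://github.com/projectcalico/calico-docker/releases/download/v0.6.0/calicoctl
for host in node1 node2; do
  echo "installing calicoctl on $host"
  # ssh "root@$host" "wget -q $CALICO_URL && chmod +x calicoctl && mv calicoctl /usr/bin/"
done
```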

4. Starting Calico services

After installing Calico on each of our nodes, we'll start the Calico services. To do so, run the following commands.

On 1st Node

# calicoctl node

WARNING: Unable to detect the xt_set module. Load with `modprobe xt_set`
WARNING: Unable to detect the ipip module. Load with `modprobe ipip`
No IP provided. Using detected IP: 10.130.61.244
Pulling Docker image calico/node:v0.6.0
Calico node is running with id: fa0ca1f26683563fa71d2ccc81d62706e02fac4bbb08f562d45009c720c24a43

On 2nd Node

Next, we'll export a global variable to connect our Calico nodes to the same etcd server, which in our case is hosted on node1. To do so, run the following command on each of the other nodes.

# export ETCD_AUTHORITY=10.130.61.244:2379

Then, we'll run the calicoctl node container on our second node.

# calicoctl node

WARNING: Unable to detect the xt_set module. Load with `modprobe xt_set`
WARNING: Unable to detect the ipip module. Load with `modprobe ipip`
No IP provided. Using detected IP: 10.130.61.245
Pulling Docker image calico/node:v0.6.0
Calico node is running with id: 70f79c746b28491277e28a8d002db4ab49f76a3e7d42e0aca8287a7178668de4

This command should be executed on every node on which we want to start the Calico services; it starts a container on the respective node. To check whether the container is running, run the following docker command.

# docker ps

Docker Running Containers

If we see output similar to that shown above, we can confirm that the Calico containers are up and running.

5. Starting Containers

Next, we'll start a few containers on each of our nodes running the Calico services. We'll assign a different name to each of the containers running Ubuntu; here, workload-A, workload-B, and so on have been assigned as the unique names. To do so, run the following commands.

On 1st Node

# docker run --net=none --name workload-A -tid ubuntu

Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
...
91e54dfb1179: Already exists
library/ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:73fbe2308f5f5cb6e343425831b8ab44f10bbd77070ecdfbe4081daa4dbe3ed1
Status: Downloaded newer image for ubuntu:latest
a1ba9105955e9f5b32cbdad531cf6ecd9cab0647d5d3d8b33eca0093605b7a18

# docker run --net=none --name workload-B -tid ubuntu

89dd3d00f72ac681bddee4b31835c395f14eeb1467300f2b1b9fd3e704c28b7d

On 2nd Node

# docker run --net=none --name workload-C -tid ubuntu

Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
...
91e54dfb1179: Already exists
library/ubuntu:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:73fbe2308f5f5cb6e343425831b8ab44f10bbd77070ecdfbe4081daa4dbe3ed1
Status: Downloaded newer image for ubuntu:latest
24e2d5d7d6f3990b534b5643c0e483da5b4620a1ac2a5b921b2ba08ebf754746

# docker run --net=none --name workload-D -tid ubuntu

c6f28d1ab8f7ac1d9ccc48e6e4234972ed790205c9ca4538b506bec4dc533555

Similarly, if we have more nodes, we can run the Ubuntu docker container on them with the above command, assigning a different container name each time.

6. Assigning IP addresses

After we have our docker containers running on each host, we'll add networking support to them. We'll assign a new IP address to each of the containers using calicoctl, which adds a new network interface to the container with the assigned address. To do so, run the following commands on the hosts running the containers.

On 1st Node

# calicoctl container add workload-A 192.168.0.1
# calicoctl container add workload-B 192.168.0.2

On 2nd Node

# calicoctl container add workload-C 192.168.0.3
# calicoctl container add workload-D 192.168.0.4
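Since the addresses are assigned sequentially from 192.168.0.0/24, the four commands above follow a simple pattern, and a small loop can generate them. This sketch only prints the commands, so nothing is modified:

```shell
# Print the calicoctl commands that assign sequential IPs from
# 192.168.0.0/24 to the four example workloads.
i=1
for w in workload-A workload-B workload-C workload-D; do
  cmd="calicoctl container add $w 192.168.0.$i"
  echo "$cmd"
  i=$((i + 1))
done
```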

7. Adding Policy Profiles

Now that our containers have network interfaces and IP addresses assigned, we need to add policy profiles to enable networking between them. After adding the profiles, containers will be able to communicate with each other only if they share a common profile; containers with different profiles cannot reach each other. So, before assigning profiles, we first need to create them. That can be done on either host; here, we'll run the following commands on the 1st node.

# calicoctl profile add A_C

Created profile A_C

# calicoctl profile add B_D

Created profile B_D

After the profiles have been created, we simply add each workload to the required profile. In this tutorial, we'll place workload A and workload C in the common profile A_C, and workload B and workload D in the common profile B_D. To do so, run the following commands on our hosts.

On 1st Node

# calicoctl container workload-A profile append A_C
# calicoctl container workload-B profile append B_D

On 2nd Node

# calicoctl container workload-C profile append A_C
# calicoctl container workload-D profile append B_D

8. Testing the Network

After adding a policy profile to each of our containers with calicoctl, we'll test whether the network works as expected. From one node, we'll try to reach containers running on the same and on different nodes. Because of the profiles, we should only be able to reach containers sharing a common profile: workload A should be able to communicate with C and vice versa, but not with B or D. To test the network, we'll ping the containers with common profiles from the 1st host running workloads A and B.

We'll first ping workload-C, which has IP 192.168.0.3, from workload-A as shown below.

# docker exec workload-A ping -c 4 192.168.0.3

Then, we'll ping workload-D having ip 192.168.0.4 using workload-B as shown below.

# docker exec workload-B ping -c 4 192.168.0.4

Ping Test Success

Now, we'll check if we're able to ping the containers having different profiles. We'll now ping workload-D having ip address 192.168.0.4 using workload-A.

# docker exec workload-A ping -c 4 192.168.0.4

After done, we'll try to ping workload-C having ip address 192.168.0.3 using workload-B.

# docker exec workload-B ping -c 4 192.168.0.3

Ping Test Failed

Hence, workloads sharing the same profile could ping each other, whereas those with different profiles could not.
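The expected reachability matrix can be written down as a tiny shell function: two workloads can reach each other exactly when they share a profile (A_C or B_D in this tutorial). This is pure logic with no network involved, handy for checking ping results against expectations:

```shell
# Expected reachability: workloads sharing a profile (A_C or B_D)
# can ping each other; all other pairs cannot.
can_ping() {
  case "$1-$2" in
    A-C|C-A|B-D|D-B) echo yes ;;
    *) echo no ;;
  esac
}
echo "A -> C: $(can_ping A C)"   # yes
echo "A -> D: $(can_ping A D)"   # no
```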

Conclusion

Calico is an awesome project providing an easy way to configure a virtual network using the latest Docker technology. It is considered a great open source solution for virtual networking in cloud data centers. These days Calico is being tried out on different cloud platforms such as AWS, DigitalOcean and GCE. As Calico is still experimental, a stable version hasn't been released yet and it remains in pre-release. The project provides well-written documentation, tutorials and manuals on its official documentation site.

The post Getting Started to Calico Virtual Private Networking on Docker appeared first on LinOxide.

How to Setup Installation of Kolab Groupware on CentOS 7.0


Kolab is a free, secure, scalable, reliable and open source groupware server with a web administration interface, resource management, synchronization for several devices and more. A variety of clients can access its features, including Kolab clients for Mozilla, Outlook and KDE. The core features of the Kolab groupware are email, calendaring, address books and task management.

So, Kolab Groupware provides multiple functions: an email server, spam and virus filtering, and a web interface that supports secure protocols such as IMAPS, SMTPS and HTTPS. The web interface can be used to add, modify and remove users, domains, distribution lists and shared folders, among other things.

1) System Preparation

Kolab's installation process is simple to follow, but we need to take care of a few things before installing it on CentOS 7.0.

The base operating system we are using in this tutorial is CentOS 7.0 with minimal installation packages. Connect to your CentOS server as the root user and configure your basic server settings by following these few steps.

Network Setup

Configure your CentOS 7 server with a static IP address and a fully qualified domain name, as Kolab has strict DNS requirements for how the machine refers to itself and how others will locate it.

You can check and setup your hostname using following commands respectively.

# hostname -f
# hostnamectl set-hostname cen-kolab
# vim /etc/hosts
172.25.10.173 cen-kolab.linoxide.com cen-kolab

Configure Firewall

If you are working in a critical environment you should keep SELinux and firewalld enabled on your CentOS 7 server, while in a test or non-production environment you may disable both.

According to the Kolab developers, Kolab is not fully compatible with SELinux features, so it is recommended that you configure SELinux in permissive mode.

SELinux can be checked and configured with the commands below.

# sestatus
# setenforce 0

To enable and start firewall service in CentOS 7, run the following commands.

# systemctl enable firewalld
# systemctl start firewalld

To allow the ports and services required by Kolab through the CentOS 7 firewall, let's create a simple script that adds them all and execute it on the system.

# vim firewall_rules.sh

#!/bin/bash
for s in ssh http https pop3s imaps smtp ldap ldaps
do
firewall-cmd --permanent --add-service=$s
done
for p in 110/tcp 143/tcp 587/tcp
do
firewall-cmd --permanent --add-port=$p
done
firewall-cmd --reload

After saving the changes, execute the script and then run the command below to check whether all the mentioned ports are allowed.

# iptables -L -n -v

Allowed Ports

2) Installing Kolab on CentOS 7

Now we will install Kolab after adding its latest EPEL repository for CentOS 7. Run the command below to install the EPEL release package.

Adding EPEL

# rpm -Uhv https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Kolab EPEL

Downloading EPEL

Now run the commands below to download the newly added repository files on your CentOS 7 server.

# wget http://obs.kolabsys.com/repositories/Kolab:/3.4/CentOS_7/Kolab:3.4.repo
# wget http://obs.kolabsys.com/repositories/Kolab:/3.4:/Updates/CentOS_7/Kolab:3.4:Updates.repo
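Note that yum only reads repository definitions from /etc/yum.repos.d, so if the wget commands above were run in another directory, the downloaded .repo files need to be moved there. A sketch follows (the mv itself is commented out; run it on the server from the download directory):

```shell
# yum reads .repo files only from /etc/yum.repos.d; move the downloaded
# Kolab repository files there if wget was run elsewhere.
DEST=/etc/yum.repos.d
for repo in "Kolab:3.4.repo" "Kolab:3.4:Updates.repo"; do
  echo "mv '$repo' $DEST/"
  # mv "$repo" "$DEST/"   # uncomment on the target server
done
```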

To add its GPG key, use the following command.

# rpm --import https://ssl.kolabsys.com/community.asc

Download Kolab Repository

Installing Yum Plugin

To install yum plugin priorities package, run the below command.

# yum install yum-plugin-priorities

yum plugin priorities

Press "Y" to proceed with the package installation.

==============================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================
Installing:
yum-plugin-priorities noarch 1.1.31-29.el7 base 24 k

Transaction Summary
==============================================================================================================================
Install 1 Package

Total download size: 24 k
Installed size: 28 k
Is this ok [y/d/N]: y

Install Kolab Groupware

Finally we have reached the point of installing Kolab groupware. Run the command below to start its installation on CentOS 7 with yum.

# yum install kolab

installing kolab

This will install the Kolab groupware along with a number of packages, including its various dependencies. Press "Y" to continue if you agree to install these packages as shown below.

Transaction Summary
==============================================================================================================================
Install 1 Package (+341 Dependent packages)
Upgrade ( 7 Dependent packages)

Total download size: 198 M
Is this ok [y/d/N]: y

During the installation you will be asked to confirm the GPG key before the packages are installed. Press "Y" to accept the changes and let the installation complete.

import GPG Key

3) Starting Services

After the Kolab groupware installation, start the Apache web server, MariaDB and Postfix services and enable them to start automatically at each reboot using the following commands.

For Apache

# systemctl start httpd.service
# systemctl enable httpd.service

For MariaDB

# systemctl start mariadb.service
# systemctl enable mariadb.service

For Postfix

# systemctl start postfix.service
# systemctl enable postfix.service

4) Configuring Kolab Groupware

Now we will start the Kolab setup process using the command below. The first thing it will ask us to configure is the FQDN; then it will ask for passwords that will be used later on.

Let's run the following command to start Kolab setup as shown.

# setup-kolab

Kolab Setup

Configure the hostname accordingly.

linoxide.com [Y/n]: n
Domain name to use: cen-kolab.linoxide.com

The standard root dn we composed for you follows. Please confirm this is the root
dn you wish to use.

dc=cen-kolab,dc=linoxide,dc=com [Y/n]: Y

Setup is now going to set up the 389 Directory Server. This may take a little
while (during which period there is no output and no progress indication).

Once the Kolab setup is complete, it is good practice to reboot your server and make sure that all services come back up and running.

5) Kolab Web Login

Now we can log in to the web admin interface using the URL and the credentials that you configured during setup.

Let's open the Kolab web admin page as shown below.

http://172.25.10.173/kolab-webadmin/

Kolab Web Admin

After providing valid credentials you will be greeted with the Kolab Web Administration page, where you can manage users, resources and other objects.

Kolab Server Maintenance

Conclusion

We have successfully installed and configured Kolab on CentOS 7, one of the best groupware solutions. Its email and calendar services stay under your control, so your private data will never be crawled; get ready to run your own Kolab server. Do let us know if you encounter any difficulty.

The post How to Setup Installation of Kolab Groupware on CentOS 7.0 appeared first on LinOxide.

Hunting XOR DDoS and other Malware with RKHunter on CentOS 7


Hello penguins! In this article we are going to learn to hunt rootkits and other threats with Rootkit Hunter. Among other things, you will be able to use it to find signs of some variants of the XOR.DDoS malware, which is currently being used to turn Linux systems into botnets for massive distributed denial of service attacks.

Found XOR DDoS Rootkit

Table of Contents

  • Install
    • Download
    • Patch (optional)
    • Install
  • Configure
    • Tests
    • Logs
    • Whitelists
    • Misc
  • Run
    • Flags
    • Cron scheduling

Install

Download rkhunter; you can use cURL for this.

curl http://nbtelecom.dl.sourceforge.net/project/rkhunter/rkhunter/1.4.2/rkhunter-1.4.2.tar.gz -o rkhunter-1.4.2.tar.gz

Then extract the contents of the package.

tar zxvf rkhunter-1.4.2.tar.gz

Enter tarball directory.

cd rkhunter-1.4.2

Patch (Optional)

This step will patch the rkhunter script and its database so it looks for the XOR.DDoS Linux malware. The patch is based on the ports and files found in the reports made by Akamai, Avast and Malware Must Die.

Enter the files directory under the rkhunter directory.

cd files

Install the patch utility with yum.

yum install patch

Now download the patch.

curl http://sourceforge.net/p/rkhunter/patches/44/attachment/rkhunter.patch -o rkhunter.patch

Alternatively, you can copy and paste the contents of the rkhunter.patch file from here.

--- rkhunter    2014-03-12 17:54:55.000000000 -0300
+++ rkhunter.new        2015-10-02 17:01:25.040000000 -0300
@@ -7797,6 +7797,19 @@
#

+       # XOR.DDoS
+       XORDDOS_FILES="/lib/udev/udev
+                      /lib/udev/debug
+                      /etc/cron.hourly/cron.sh
+                      /etc/cron.hourly/udev.sh
+                      /lib/libgcc4.so
+                      /var/run/udev.pid
+                      /var/run/sftp.pid"
+       XORDDOS_DIRS=
+       XORDDOS_KSYMS=
+
+
+
# 55808 Variant A
W55808A_FILES="/tmp/.../r
/tmp/.../a"
@@ -11907,6 +11920,13 @@
return
fi

+       # XOR.DDoS Rootkit
+       SCAN_ROOTKIT="XOR.DDoS - Rootkit"
+       SCAN_FILES=${XORDDOS_FILES}
+       SCAN_DIRS=${XORDDOS_DIRS}
+       SCAN_KSYMS=${XORDDOS_KSYMS}
+       scanrootkit
+

# 55808 Trojan - Variant A

--- backdoorports.dat   2010-11-13 20:41:19.000000000 -0300
+++ backdoorports.dat.new       2015-10-02 17:10:24.086000000 -0300
@@ -12,6 +12,7 @@
2001:Scalper:UDP:
2006:CB Rootkit or w00tkit Rootkit SSH server:TCP:
2128:MRK:TCP:
+3502:Possible XOR.DDoS Botnet Malware:TCP:
6666:Possible rogue IRC bot:TCP:
6667:Possible rogue IRC bot:TCP:
6668:Possible rogue IRC bot:TCP:

Apply the patch to the rkhunter script and backdoorports.dat files with the following command.

patch < rkhunter.patch

rkhunter.patch output
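To confirm the patch actually landed, you can grep the patched script for the markers it introduces. The check below is a sketch that expects to be run inside the files/ directory:

```shell
# Sanity check: after patching, the rkhunter script should contain
# the XORDDOS_FILES variable introduced by the patch.
TARGET=rkhunter
if [ -f "$TARGET" ]; then
  grep -c "XORDDOS_FILES" "$TARGET"
else
  echo "run this inside the files/ directory"
fi
```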

The patch is done; now go back to the tarball root directory to continue the install.

cd ..

Install files

Run the installer script with the following parameters to install it under /usr/local.

./installer.sh --install --layout /usr/local

You can also use the --examples flag to show more layout information and examples, or the --show option instead of --install to show what would be installed on your layout.

Install Unhide (recommended)

The unhide and unhide-tcp utilities look for hidden processes and ports. While not mandatory, they are highly recommended, as most sophisticated rootkits will hide their presence.

First, we need to install GNU Compiler Collection.

yum install gcc

Install glibc-static, needed to create the statically linked binaries.

yum install glibc-static

Compile unhide-linux (run this inside the directory containing the unhide source files).

gcc -Wall -O2 --static -pthread unhide-linux*.c unhide-output.c -o unhide-linux

Compile unhide-tcp.

gcc -Wall -O2 --static unhide-tcp.c unhide-tcp-fast.c unhide-output.c  -o unhide-tcp

Install the files under /usr/local/bin and create a symbolic link to unhide.

cp unhide-linux unhide-tcp /usr/local/bin && cd /usr/local/bin/ && ln -s unhide-linux unhide && cd -
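With the binaries in place, typical manual runs look like the loop below; unhide supports (among others) the proc, sys and brute scanning techniques, and unhide-tcp takes no arguments for a basic scan. The commands require root on a real host, so this sketch only prints them:

```shell
# Typical unhide invocations (proc, sys and brute are unhide's scanning
# techniques); printed only, since they require root on a real host.
for cmd in "unhide proc" "unhide sys" "unhide brute" "unhide-tcp"; do
  echo "would run: $cmd"
  # $cmd
done
```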

Configure

In this section I will show some of the options found in the rkhunter.conf file. The options are grouped and their descriptions simplified; read the full description in the file itself, and if you are unsure just leave the defaults, as they should be enough. Most options are commented out by default.

You are encouraged to do a first run before making any changes to the configuration file. This will give you a better understanding of how rkhunter works and let you identify some false positives to be whitelisted in the configuration file.

Just call rkhunter with the -c or --check parameter.

rkhunter -c

Running rkhunter

As you can see in the image above, there will be some warnings about files like egrep or ifup being scripts instead of ELF binaries. They are nonetheless legitimate system files, and most of the options in the configuration file control how rkhunter ignores such occurrences.

Tests

The options ENABLE_TESTS and DISABLE_TESTS set which types of tests are to be run: enable all and then disable the undesired ones. It is a good idea to keep at least suspscan disabled by default, as it is prone to false positives.

ENABLE_TESTS=ALL

DISABLE_TESTS=suspscan

Secure Shell

It's never a good idea to enable root login over SSH; use su/sudo instead. Otherwise, set this to yes.

ALLOW_SSH_ROOT_USER=no

Version 1 of the SSH protocol is known to be insecure; set this to 1 if you need to skip this protocol check.

ALLOW_SSH_PROT_V1=0

Network ports

Allowed network ports, in the format protocol:port.

PORT_WHITELIST

Set the whitelist for some programs with the syntax path_to_binary:protocol:port_number

PORT_PATH_WHITELIST=/usr/sbin/squid:TCP:3801

Application Version

This option lets you run some outdated applications. This is generally not recommended, and you must be sure that an application is safe before putting it on this list.

APP_WHITELIST=openssl:0.9.7d gpg httpd:1.3.29

Sniffers

Allow the use of sniffers, software that captures network packets.

Allow the following processes to listen to the network, as in the following line.

ALLOWPROCLISTEN=/usr/sbin/snort-plain

This will allow the listed network interface to listen to the network in promiscuous mode.

ALLOWPROMISCIF=eth0

Files

You will need to create some exceptions for the tests made by rkhunter. The following options let you bypass tests for specific objects, such as files and directories.

Allow some hidden directories.

ALLOWHIDDENDIR=/etc/.java

Allow some hidden files.

ALLOWHIDDENFILE=/usr/share/man/man1/..1.gz
ALLOWHIDDENFILE=/usr/share/man/man5/.k5identity.5.gz
ALLOWHIDDENFILE=/usr/share/man/man5/.k5login.5.gz

This whitelist will allow some files to be scripts instead of ELF binaries.

SCRIPTWHITELIST=/usr/sbin/ifdown
SCRIPTWHITELIST=/usr/sbin/ifup
SCRIPTWHITELIST=/usr/bin/egrep
SCRIPTWHITELIST=/usr/bin/fgrep
SCRIPTWHITELIST=/usr/bin/ldd
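Before whitelisting a file, you can verify by hand that it really is a script: an ELF binary starts with the magic bytes 0x7f 'E' 'L' 'F', while a script starts with "#!". A small helper sketch (my own, not part of rkhunter) that checks a throwaway sample file:

```shell
# is_elf: report whether a file starts with the ELF magic bytes.
is_elf() {
  [ "$(head -c 4 "$1" | tail -c 3)" = "ELF" ]
}
printf '#!/bin/sh\necho hi\n' > /tmp/sample.sh   # a throwaway script file
if is_elf /tmp/sample.sh; then
  echo "/tmp/sample.sh is an ELF binary"
else
  echo "/tmp/sample.sh is a script"
fi
```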

Allow file to be world writable.

WRITEWHITELIST=/usr/bin/date

Allow a file to have its attributes changed.

ATTRWHITELIST=/usr/bin/date

Allow process to query deleted files.

ALLOWPROCDELFILE=/sbin/cardmgr

Log Options

This will define which file to log to.

LOGFILE=/var/log/rkhunter.log

Set this to 1 if you want to keep logging to the same file every time rkhunter runs; the default is 0, which appends '.old' to the existing log file and creates a new one.

APPEND_LOG=0

If you want to keep the log file when there is something wrong, set the following option to 1.

COPY_LOG_ON_ERROR=0

Uncomment and set the log facility if you want to use syslog.

USE_SYSLOG=authpriv.warning

By default, whitelisted items will report OK on tests; if you want whitelisted items highlighted instead, set this option to 1.

WHITELISTED_IS_WHITE=0

Operating System options

Set the package manager option to RPM on Red Hat like systems, which include CentOS.

PKGMGR=RPM

Enable this to report warning when operating system changes version/release.

WARN_ON_OS_CHANGE

Should we update our database when operating system change?

UPDT_ON_OS_CHANGE

Where to find the operating system release file, set to /etc/redhat-release on CentOS.

OS_VERSION_FILE=/etc/redhat-release

Locking

If you are likely to have more than one rkhunter instance running at the same time, enable this option to use lock files and avoid database corruption.

USE_LOCKING=0

If you enabled the use of locks, then you should set a timeout to avoid deadlocks.

LOCK_TIMEOUT

Should we warn about locked sessions?

SHOW_LOCK_MSGS

Startup and Superdaemon

Where is the inetd config file.

INETD_CONF_PATH=/etc/inetd.conf

Which services are allowed to run through the inetd.

INETD_ALLOWED_SVC=/usr/sbin/rpc.metad /usr/sbin/rpc.metamhd

Xinetd config file.

XINETD_CONF_PATH=/etc/xinetd.conf

RC startup files paths.

STARTUP_PATHS=/etc/rc.d /etc/rc.local

Accounts

The file that contains the shadowed passwords.

PASSWORD_FILE=/etc/shadow

Allow user accounts other than root to have UID 0.

UID0_ACCOUNTS=toor rooty

Allow accounts without password.

PWDLESS_ACCOUNTS=abc

Syslog

Syslog config file.

SYSLOG_CONFIG_FILE=/etc/syslog.conf

Allow syslog to log remotely.

ALLOW_SYSLOG_REMOTE_LOGGING=0

Reports

Report the number of warnings?

SHOW_SUMMARY_WARNINGS_NUMBER

Show the total time needed to run the tests?

SHOW_SUMMARY_TIME

To receive mail reports when rkhunter finds something, you must set the following options and have a mail application installed.

Who will receive the email.

MAIL-ON-WARNING=your-email@your.domain

Which command used to send email.

MAIL_CMD=mail -s "[rkhunter] Warnings found for ${HOST_NAME}"

Running rkhunter

OK, at this point you should already have run rkhunter at least once; now take a look at some other flags that can be used with it.

Check Your Changes

After you are done with the configuration, run rkhunter with the -C or --check-config flag to check for any error in the file.

rkhunter -C

Properties Update

Now, and every time you change the configuration file, make sure to update the file properties database.

rkhunter --propupd

Report Warnings Only.

rkhunter --rwo

Sometimes you want to run only a specific test. For this, try --list tests to get the names of the available tests and then use the --enable flag followed by the test name.

rkhunter --list tests

rkhunter checking network

The following option will disable the key press prompt.

rkhunter --sk

To run rkhunter as a cron job, use the --cronjob flag; create the executable file /etc/cron.daily/rkhunter.sh with the following contents to do a daily check

#!/bin/sh

( /usr/local/bin/rkhunter --versioncheck
/usr/local/bin/rkhunter --update
/usr/local/bin/rkhunter --cronjob -c ) >> /dev/null 2>&1
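If you prefer to script this step, the heredoc below writes the same cron script shown above and marks it executable; cron will silently skip a file that is not executable. The /etc/cron.daily path is the conventional one; adjust it if your distribution uses something else.

```shell
# Create the daily rkhunter cron script non-interactively.
mkdir -p /etc/cron.daily
cat > /etc/cron.daily/rkhunter.sh <<'EOF'
#!/bin/sh
( /usr/local/bin/rkhunter --versioncheck
/usr/local/bin/rkhunter --update
/usr/local/bin/rkhunter --cronjob -c ) >> /dev/null 2>&1
EOF
# Must be executable, or run-parts/cron will ignore it.
chmod 755 /etc/cron.daily/rkhunter.sh
```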

Conclusion

This should get you started with rkhunter, providing you with one more security layer. However, it will not be enough if you neglect basic security principles, or if you put every warning you meet on whitelists instead of mitigating the problems. Also keep in mind that rkhunter will help prevent your machines from becoming members of a Linux botnet, but will not protect your site from being the target of a DDoS campaign. Thanks for reading!

The post Hunting XOR DDoS and other Malware with RKHunter on CentOS 7 appeared first on LinOxide.

How to Setup Seafile Secure Cloud Storage on CentOS 7


Hello everybody, today's article is on an open source secure cloud storage platform: Seafile. You can use Seafile at home or in the office to synchronize your files and data with PCs and mobile devices easily, or use its web interface for managing your data files. It is an ideal storage solution, especially for small businesses, where you have the flexibility of group sharing and multiple projects without necessarily using a public server, with security provided by client-side encryption of data.

You can also choose to host your data on the seafile cloud or run your own local Seafile server by following this installation and configuration guide on RHEL or CentOS 6.6/7.0.

Prerequisites

The Seafile secure cloud storage installation depends upon the following prerequisites.

System Update

Log in to your CentOS server with root credentials, configure the FQDN with a static IP address, then run the below command to update your server with the latest updates.

# yum update

LAMP Setup

You must have a basic LAMP server set up on your CentOS server; make sure that its services are working fine. In this tutorial we will be using the Apache web server with MariaDB as the database server.

Python Packages

The Seafile storage setup requires some python modules that must be installed on your server; otherwise your installation will be unsuccessful and you will be asked to install all missing dependencies.

You can install the required python modules by using the following command.

# yum install MySQL-python python-imaging python-simplejson python-setuptools

Download Seafile Server Package

The Seafile server package can be downloaded from the official Seafile download page, where you can see its cross-platform packages. We will be choosing the generic Linux 64-bit package as shown.

Seafile Server Download

You can download this package to a temporary directory using the wget command with the complete download path, as below.

# wget https://bintray.com/artifact/download/seafile-org/seafile/seafile-server_4.4.1_x86-64.tar.gz

When the package download completes, create a new directory in the web document root of your server and extract the Seafile server package into it.

# mkdir /var/www/storage/
# tar -zxvf seafile-server_4.4.1_x86-64.tar.gz -C /var/www/storage/

Seafile Installation Setup

To start the installation setup, move to the folder where we extracted the installation package and execute the following script.

[root@centos-seafile seafile-server]# ./setup-seafile-mysql.sh

The script will check the required dependencies, then you will be asked to press the Enter key to continue.

Seafile Server Installation

Once you hit the Enter key, you will be asked to configure some required parameters: your server's name, its FQDN/IP, and the default port for the seafile fileserver.

seafile server configuration

Then you will be asked to configure your Seafile databases. If you have not already created the databases, don't worry; choose the option to create new databases during the Seafile installation setup, as shown below.

-------------------------------------------------------
Please choose a way to initialize seafile databases:
-------------------------------------------------------

[1] Create new ccnet/seafile/seahub databases
[2] Use existing ccnet/seafile/seahub databases

[ 1 or 2 ] 1

What is the host of mysql server?
[ default "localhost" ]

What is the port of mysql server?
[ default "3306" ]

What is the password of the mysql root user?
[ root password ]

verifying password of user root ... done

Enter the name for mysql user of seafile. It would be created if not exists.
[ default "root" ] seafile

Enter the password for mysql user "seafile":
[ password for seafile ]

Enter the database name for ccnet-server:
[ default "ccnet-db" ]

Enter the database name for seafile-server:
[ default "seafile-db" ]

Enter the database name for seahub:
[ default "seahub-db" ]

Seafile server Configuration

After hitting the Enter key, the installation process will continue, generating its configuration files and database tables as below.

Generating ccnet configuration ...

done
Successfully create configuration dir /var/www/storage/ccnet.
Generating seafile configuration ...

Done.
done
Generating seahub configuration ...

----------------------------------------
Now creating seahub database tables ...

----------------------------------------

creating seafile-server-latest symbolic link ... done

Upon successful completion of the Seafile installation you will be greeted with the following useful information and instructions to continue with other configurations.

Seafile Configurations

Starting Seafile Server

To start Seafile, execute the seafile.sh script as shown below.

[root@centos-seafile seafile-server]# ./seafile.sh start

Starting Seafile Server

Then you will be asked to configure your admin email account, after which you will be greeted with the below success message.

----------------------------------------
It's the first time you start the seafile server. Now let's create the admin account
----------------------------------------

What is the email for the admin account?
[ admin email ] kashifs@linoxide.com

What is the password for the admin account?
[ admin password ]

Enter the password again:
[ admin password again ]

----------------------------------------
Successfully created seafile admin
----------------------------------------

Seahub is started

Done.

Login to Seahub

Now open any web browser to access the Seahub dashboard to manage and share your libraries and folders.

Open the URL with your FQDN or server's IP address and the configured default port, and log in with the admin email address that you created during the Seafile server startup.

http://your_servers_ip:8000

Login to Seahub

Creating New Libraries

Upon successful login you will be greeted with a welcome screen and then directed to the dashboard, which organizes files into libraries; each library can be synced and shared separately. There is an already-created personal library, but you are free to create more libraries, whether personal or for sharing purposes.

Seahub New Library

Uploading your data

To upload your data, such as folders or images, just click on the particular folder and then choose from the available options to upload your data as shown.

Seahub uploading data

Seafile Client Installation

Our Seafile server setup is ready; now we will show its client-side installation on the Windows 7 operating system. The Seafile client package is available for different operating systems, but we will choose Windows here.

seafile client package

Once the client is downloaded, click on it to run the installation process, choose the appropriate options for its program files, click the Next button, and then click the Install button to start the installation as shown below.

Seafile Client Installation

When you finish the installation process you will be asked to choose a location with sufficient free space for storing Seafile libraries.

Seafile Folder location

Adding Seafile Account

To log in to the Seafile client you must have an account configured with your Seafile server. You can also add new users from your Seafile server. On Windows, we will be using our admin email account to log in, providing the following parameters.

Adding seafile account

After adding your account you will be logged in to the Seafile client and asked to download the default library. Simply click on the Yes button to download it to the default location that you chose in the previous step.

Download Default Library

Seafile organizes files using libraries, so after downloading the default library it will create a virtual disk where you will find the default document containing some information on using Seafile. Whenever you need to upload some data, just click on the particular library and click on the plus icon to upload and share your files or folders.

Uploading data

Conclusion

Congrats, our Seafile cloud storage is all set up. Now you can easily manage your data for remote sharing. It is a great tool to use. So, set up your own, and feel free to get back to us in case of any issue, and leave your valuable comments and suggestions.

The post How to Setup Seafile Secure Cloud Storage on CentOS 7 appeared first on LinOxide.

How to Migrate Container Data Volume to Second Host with Flocker


Flocker is free and open source software for managing container data volumes in dockerized applications. With native Docker, if we migrate a container from one server to another, the data volume is left behind and only the container is moved. With the advancement of heavy development on Docker technology, a new platform named Flocker was born. It not only moves containers but migrates the container and its data volume together. This makes a Flocker data volume, also known as a dataset, portable: it can be used with any container in the cluster. This key feature of Flocker makes it very popular among ops teams for running containerized stateful services like databases in production. This tutorial is all about how we can migrate a container from one server to another along with its data volume.

Here are some steps on how we can migrate a container with data volume from one server to another using Flocker.

Prerequisites

First of all, before we get started, we'll need to fulfill some essentials. We'll need 3 nodes to do this job. First we'll have a Client Node on which we'll store the configuration files and run flocker-cli. A client node can be our own laptop, desktop or any other computer, or even a server. Next, we'll need 2 other nodes on which we'll run the docker containers using Flocker and move a running container along with its data volume without any interruption. The following are the systems that we are going to set up, with their IP addresses; they run Ubuntu 15.04 as their operating system and are in the same Flocker cluster.

Client Node 0: 104.130.26.196
Container Node 1: 104.130.169.227
Container Node 2: 104.130.26.245

After we are done with setting up the flocker cluster, we'll now go for running a docker container using flocker.

1. Creating Application File

We'll now create a docker compose file, or application file, which will define the containers we want to run with their respective configurations such as docker image, name, ports, data volumes and more. We'll create the YAML file in docker compose format using a text editor. To do so, we'll run the following command to start a text editor and create the docker-compose.yml file on Node 0.

# nano docker-compose.yml

After opening the text editor, we'll now append the file as shown below.

web:
  image: clusterhq/flask
  links:
   - "redis:redis"
  ports:
   - "80:80"
redis:
  image: redis:latest
  ports:
   - "6379:6379"
  volumes: ["/data"]

Configuring Docker Compose

The above configuration defines 2 containers, one named web and another redis. The web one will run a container from the image clusterhq/flask, exposed on port 80, whereas redis will run a container from the latest release of the redis image, exposed on port 6379 with a data volume under the /data directory.
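As a side note, if you are provisioning Node 0 with a script rather than an interactive editor, the same file can be written with a shell heredoc; the contents are identical to the docker-compose.yml shown above.

```shell
# Write docker-compose.yml without opening an editor.
# The quoted 'EOF' prevents any shell expansion inside the file body.
cat > docker-compose.yml <<'EOF'
web:
  image: clusterhq/flask
  links:
   - "redis:redis"
  ports:
   - "80:80"
redis:
  image: redis:latest
  ports:
   - "6379:6379"
  volumes: ["/data"]
EOF
```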

2. Creating Deployment File

Next, we'll create another file named flocker-deploy1.yml in which we'll define the most important part of our tutorial: where those containers will be deployed. Here, we'll deploy both containers, the Python web app (Flask) and the Redis database server, on the same host, i.e. Node 1. To do so, we'll run the following command to open the text editor.

# nano flocker-deploy1.yml

After opening, we'll append the YAML file as shown below.

"version": 1
"nodes":
  "104.130.169.227": ["web", "redis"]
  "104.130.26.245": []

Configuring Flocker Deploy 1

Then, we'll simply save the file and exit.

In the above YAML file, we have defined that both containers, web and redis, run on the same host, i.e. Node 1, with nothing running on Node 2.

3. Deploying Containers

After we have created those files, we'll now deploy the containers using those YAML files. To do so, we'll simply need to run the following command under sudo or root privilege.

# flocker-deploy control-service flocker-deploy1.yml docker-compose.yml

The cluster configuration has been updated. It may take a short while for changes to take effect, in particular if Docker images need to be pulled.

Deploying Flocker node1

We'll be prompted that it may take some time for the containers to be deployed as defined by the above configuration. As we have defined, both Flask and Redis must be running on the same host, i.e. Node 1. So, we'll get into Node 1 to check whether it is running both containers.

4. Inspecting Docker Containers

To check whether Node 1 is running both containers, we'll see the list of running docker containers on Node 1. We can do that by SSH tunneling into Node 1, which has the IP address 104.130.169.227, and running the docker command to list the running containers. To do so, we'll need to run the following command.

# ssh root@104.130.169.227 docker ps

Redis Web Containers Node1

5. Testing the Application

After we get those containers running, we'll want to test the application running on Node 1. To do so, we'll open those IP addresses in a web browser. When we browse http://104.130.169.227/ , we see the visit count displayed, and when we browse http://104.130.26.245/ , the visit count persists, because Flocker routes the traffic from either node defined in the deployment file to the one that has the application. This makes it possible for Flocker to move our containers and their volumes around the cluster without having to update any DNS or application settings.

Webapp FLASK Node1

6. Recreating Deployment File

Now, we'll finally rewrite the deployment file in order to move the container with its data volume. We'll create a new file (or edit the previous one), append the contents below, and save it as flocker-deploy2.yml.

# nano flocker-deploy2.yml

Then, we'll append the file as shown below.

"version": 1
"nodes":
  "104.130.169.227": ["web"]
  "104.130.26.245": ["redis"]

Configuring Flocker Deploy2

This will define the web container to run under node 1 and redis container to run under node 2.

7. Moving Container with Data Volume

Finally, we'll now deploy the newly created deployment YAML file, which will migrate the running redis container from Node 1 to Node 2 including its data volume. This will keep the web container on the same node as before, i.e. Node 1, without affecting the application.

# flocker-deploy control-service flocker-deploy2.yml docker-compose.yml

The cluster configuration has been updated. It may take a short while for changes to take effect, in particular if Docker images need to be pulled.

Deploying Flocker Node2

8. Inspecting the Migration

To check whether the redis container has really been migrated, we can list the running containers on those nodes. First, we'll check on Node 2 whether the redis server has been migrated, via SSH tunneling, by running the following command.

# ssh root@104.130.26.245 docker ps

Redis Container Node2

As we can see, only the Redis server container is running on this node; the web container is not.

Now, to cross-check, we'll see which containers are running on Node 1.

# ssh root@104.130.169.227 docker ps

Webapp Container Node1

And finally, we see that no Redis server is running on this node; only the web container is running.

9. Checking the Application

We'll now check whether the application is running fine as expected. To do so, we'll open a web browser and point it to both nodes, i.e. http://104.130.169.227/ and http://104.130.26.245/ . Here, we see that the count still persists while pointing at Node 1, even though the container with the volume has moved between hosts. And we also see that the visit count still persists on Node 2 even though the application is no longer running on that host. This verifies that the redis container has been successfully migrated with its data volume.

Webapp FLASK node2

Conclusion

This tutorial shows how easily we can migrate a container with its data volume from one host to another within the same Flocker cluster. Flocker can be used with popular container managers and orchestration tools like Docker Engine, Docker Swarm and Docker Compose, and on different platforms like Amazon AWS, RackSpace, OpenStack and Vagrant. If you have any questions, suggestions or feedback please write them in the comment box below so that we can improve or update our contents. Thank you!

The post How to Migrate Container Data Volume to Second Host with Flocker appeared first on LinOxide.

How to Install FAMP Stack and Mod Security on FreeBSD 10.2


FAMP Stack, or FreeBSD with Apache, MariaDB and PHP, is a group of open source software for serving PHP-based applications to your browser. FAMP is similar to LAMP (Linux, Apache, MariaDB/MySQL and PHP) on a Linux server.

Mod Security is an open source intrusion detection and prevention engine for web servers, with support for Apache, Nginx and IIS on Windows Server. It is an Apache module that protects against hackers and other malicious attacks like SQL injection, XSS, LFI (Local File Inclusion), RFI (Remote File Inclusion), etc.

In this tutorial we will cover the installation of the FAMP stack on FreeBSD 10.2 and give you a sample virtualhost configuration for the Apache web server. Next we will install and configure mod security to work with the FAMP stack and activate it on the virtualhost that has been created.

Step 1 - Update System

Please log in to your FreeBSD server with ssh and update your system with the commands:

freebsd-update fetch
freebsd-update install

Step 2 - Install and Configure Apache

Apache is one of the best and most popular web servers, with support for Linux, Windows and Mac OS. Apache is developed by an open community of developers under the Apache Software Foundation, and supports several language interfaces, including Perl, Python, Tcl, and PHP.

We will install apache24 with pkg command :

pkg install apache24

Please go to the apache configuration directory "/usr/local/etc/apache24" and edit the file "httpd.conf" with the nano editor :

cd /usr/local/etc/apache24
nano httpd.conf

Change the value of "ServerAdmin" on line 210 and "ServerName" on line 219 :

ServerAdmin im@localhost
.....
ServerName localhost:80

Next, before running the apache web server, we need to enable apache at boot time with the "sysrc" command :

sysrc apache24_enable=yes

Now start Apache webserver :

service apache24 start

And open your browser and visit the server IP 192.168.1.112 :

Apache Start

Step 3 - Install and Configure MariaDB

MariaDB is a drop-in replacement for MySQL, developed and maintained by the original MySQL developers under the GNU GPL. It is a fork of the MySQL relational database management system.

We will install mariadb with pkg command :

pkg install mariadb100-server

That command will install mariadb100-client too.

Now copy the mariadb configuration file from "/usr/local/share/mysql/" to "/usr/local/etc/" :

cp /usr/local/share/mysql/my-medium.cnf /usr/local/etc/my.cnf

Next, enable mariadb to start on boot time with sysrc command :

sysrc mysql_enable=yes

and lastly, start mariadb :

service mysql-server start

Now you need to configure the username and password for the mariadb/mysql server. Do so with the command :

mysql_secure_installation

Enter current password for root (enter for none): PRESS ENTER
OK, successfully used password, moving on...

Set root password? [Y/n] Y
New password: ENTER YOUR PASSWORD
Re-enter new password: ENTER YOUR PASSWORD
Password updated successfully!

Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y

Now try accessing your mariadb/mysql shell :

mysql -u root -p
ENTER YOUR PASSWORD

MariaDB started

Step 4 - Install and Configure PHP

In this tutorial we will use PHP version 5.6. Install it with the pkg command, together with mod_php56, php56-mysql, php56-mysqli and php56-curl.

pkg install mod_php56 php56-mysql php56-mysqli php56-curl

Now copy php configuration file "php.ini-production" to "php.ini" in directory "/usr/local/etc/" :

cd /usr/local/etc/
cp php.ini-production php.ini

Edit the php.ini file and set your timezone on line 926 :

nano php.ini

date.timezone = Asia/Jakarta

Next, configure php to work with apache: edit the apache configuration file and add the php configuration there.

To do it you must go to the apache configuration directory and edit "httpd.conf" with nano editor :

cd /usr/local/etc/apache24/
nano httpd.conf

Add the php configuration below line 288 :

.....
<Files ".ht*">
Require all denied
</Files>

<FilesMatch "\.php$">
SetHandler application/x-httpd-php
</FilesMatch>

<FilesMatch "\.phps$">
SetHandler application/x-httpd-php-source
</FilesMatch>
.....

and add index.php to the dir_module directive :

<IfModule dir_module>
DirectoryIndex index.php index.html
</IfModule>

Save and Exit

PHP work with Apache

Step 5 - Configure Apache VirtualHost

In this tutorial we will create a virtualhost configuration file called "saitama.me.conf" for the domain "saitama.me".

Virtualhost configuration files are stored in the "/usr/local/etc/apache24/extra/" directory. But in this tutorial we will create a new directory for virtualhosts, to make it easy to manage your virtualhosts if you have many configuration files.

Create a new directory "virtualhost" in the apache configuration directory :

cd /usr/local/etc/apache24/
mkdir virtualhost

Now create the new file "saitama.me.conf" inside the virtualhost directory :

nano virtualhost/saitama.me.conf

Add a virtualhost configuration below :

<VirtualHost *:80>
ServerAdmin im@saitama.me
# Directory for the file stored
DocumentRoot "/usr/local/www/saitama.me"
#Domain
ServerName saitama.me
ServerAlias www.saitama.me
ErrorLog "/var/log/saitama.me-error_log"
CustomLog "/var/log/saitama.me-access_log" common

<Directory "/usr/local/www/saitama.me">
Options All
AllowOverride All
# The syntax is case sensitive!
Require all granted
</Directory>
</VirtualHost>

Next, include your virtualhost configuration in the apache "httpd.conf" file :

cd /usr/local/etc/apache24/
nano httpd.conf

Add this line to the end of the file :

Include etc/apache24/virtualhost/*.conf

Next, create the document root directory for the virtualhost under "/usr/local/www/" :

mkdir -p /usr/local/www/saitama.me
cd /usr/local/www/saitama.me

And create a new file "index.php" containing a phpinfo script; you can do it with the "echo" command :

echo '<?php phpinfo(); ?>' > index.php

Now restart apache and then open "www.saitama.me" in your browser :

service apache24 restart

and you can see the php info :

php info

Step 6 - Install and Configure Mod Security

Mod Security is an apache module, so you can install it from the repository. You can also build it from source, but here we use the pkg command to install from the repository :

pkg install ap24-mod_security-2.9.0

Now load the "unique_id" module needed by mod security, by editing the apache configuration file "httpd.conf" and uncommenting line 120 :

cd /usr/local/etc/apache24/
nano httpd.conf

LoadModule unique_id_module libexec/apache24/mod_unique_id.so

Save and Exit.

Once that is done, clone the OWASP ModSecurity Core Rule Set (CRS) with the git command into the crs directory :

cd /usr/local/etc/
git clone https://github.com/SpiderLabs/owasp-modsecurity-crs crs

Now go to the crs directory and copy the example configuration file :

cd crs/
cp modsecurity_crs_10_setup.conf.example modsecurity_crs_10_setup.conf

Next, load the modsecurity module with the owasp crs rules by creating a new file "000_modsecurity.conf" in the "modules.d" directory :

cd /usr/local/etc/apache24/modules.d/
nano 000_modsecurity.conf

Paste configuration below :

# Load ModSecurity
LoadModule security2_module libexec/apache24/mod_security2.so

<IfModule security2_module>
# Include ModSecurity configuration
Include /usr/local/etc/modsecurity/modsecurity.conf

# Include OWASP Core Rule Set (CRS) configuration and base rules
Include /usr/local/etc/crs/modsecurity_crs_10_setup.conf
Include /usr/local/etc/crs/base_rules/*.conf

# Remove Rule by id
SecRuleRemoveById 981173
</IfModule>

Save and Exit.

Step 7 - Adding Mod Security to the VirtualHost

To configure a virtualhost with mod security, you need to edit the virtualhost file :

cd /usr/local/etc/apache24/virtualhost/
nano saitama.me.conf

Inside the Directory directive, add the snippet below :

......

<IfModule security2_module>
SecRuleEngine On
</IfModule>

......

And now restart apache web server :

service apache24 restart

Note :

If you have an error like this :

[unique_id:alert] [pid 4372] (EAI 8)hostname nor servname provided, or not known: AH01564: unable to find IPv4 address of "YOURHOSTNAME"

please add your hostname to the hosts file :

nano /etc/hosts

Add your hostname

127.0.0.1           YOURHOSTNAME

Step 8 - Testing Mod Security

Edit the file "modsecurity.conf" in the mod security directory "/usr/local/etc/modsecurity/" :

cd /usr/local/etc/modsecurity
nano modsecurity.conf

Change the value of "SecRuleEngine " to the "On" :

SecRuleEngine On

Save and Exit.

Restart Apache :

service apache24 restart

See the apache log file to ensure the mod security is loaded :

tail -f /var/log/httpd-error.log

Mod Security Loaded

Another test on the virtualhost, with an SQL injection attack against a wordpress plugin :

Another Test VirtualHost

Mod Security and Apache running successfully.

Conclusion

The FAMP stack (Apache, MariaDB and PHP on FreeBSD) is the counterpart of LAMP on a Linux server. It is easy to install and configure. You can install it with the pkg command or, if you have time, compile it from the "/usr/ports" directory. Mod Security is a web application firewall that protects you from hackers and malicious attacks like SQL injection. You can define your own rules and add them to apache for your web application's security.

 

The post How to Install FAMP Stack and Mod Security on FreeBSD 10.2 appeared first on LinOxide.

How to Install and Configure PostgreSQL with phpPgAdmin on CentOs 7


Hi everybody! Today's article is on setting up PostgreSQL with phpPgAdmin on CentOS 7. PostgreSQL is one of the major open-source relational database management systems, and its advanced, SQL-compliant feature set has helped shape the world of application development. The main advantage of using PostgreSQL is that it requires very little maintenance effort because of its stability, and applications based on PostgreSQL have a low cost of ownership in comparison with other database management systems. It is designed to be extensible, in that you can define your own data types, index types, functional languages, etc.

Managing databases using individual SQL statements is a difficult task, so in this article we will also show you one of the best and most popular graphical user interfaces for managing a PostgreSQL database: phpPgAdmin.

PhpPgAdmin is a web-based GUI application that makes administering your PostgreSQL databases simple. phpPgAdmin will let you add, remove and manage databases, tables, and entries; run specific SQL queries; back up the database; search and import records; and much more.

Prerequisites

Before starting the installation of PostgreSQL and phpPgAdmin, make sure that you have root access on your CentOS server and that you are connected to the internet for downloading the packages.

After login to your server, run the command below to update your centos 7 server with latest patches.

# yum update

If you are going to set up PostgreSQL and phpPgAdmin in a production environment with the firewall and SELinux enabled, make sure to allow the following default ports, which will be used by postgreSQL and apache.

# firewall-cmd --permanent --add-port=5432/tcp
# firewall-cmd --permanent --add-port=80/tcp
# firewall-cmd --reload

To allow in SELinux run the below command.

# setsebool -P httpd_can_network_connect_db 1

PostgreSQL Installation

By default CentOS 7 comes with PostgreSQL version 9.2.1, which can be installed using a simple yum command, while the current latest PostgreSQL version is 9.4.5. So, in this tutorial we will install the latest version of PostgreSQL by using the PostgreSQL Yum repository.

Installing PostgreSQL Repository

To get the latest yum repository for the latest PostgreSQL package, open the PostgreSQL Download Page or copy the link and run the below wget command.

# wget http://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-redhat94-9.4-1.noarch.rpm

PostgreSQL Latest Repo

After downloading the rpm, we have to install this repository first, before starting the PostgreSQL installation, using the below commands.

# rpm -i pgdg-redhat94-9.4-1.noarch.rpm
# yum install postgresql94-server postgresql94-contrib

PostgreSQL Installation

After running the above command, the following packages will be installed, including a few dependencies. To proceed with the installation, press the "Y" key to continue, as shown.

Dependencies Resolved
========================================================================================
Package Arch Version Repository Size
========================================================================================
Installing:
postgresql94-contrib x86_64 9.4.5-1PGDG.rhel7 pgdg94 610 k
postgresql94-server x86_64 9.4.5-1PGDG.rhel7 pgdg94 3.8 M
Installing for dependencies:
libxslt x86_64 1.1.28-5.el7 base 242 k
postgresql94 x86_64 9.4.5-1PGDG.rhel7 pgdg94 1.0 M
postgresql94-libs x86_64 9.4.5-1PGDG.rhel7 pgdg94 209 k

Transaction Summary
=======================================================================================
Install 2 Packages (+3 Dependent packages)

Total download size: 5.9 M
Installed size: 25 M
Is this ok [y/d/N]: y

Once the installation is complete, run the below command to initialize the database.

# /usr/pgsql-9.4/bin/postgresql94-setup initdb
Initializing database ... OK

Starting Database Service

To start the PostgreSQL service and enable it at boot, run the following commands and then check the status; it should be up and enabled.

# systemctl start postgresql-9.4
# systemctl enable postgresql-9.4

Starting PostgreSQL services

Using PostgreSQL Command line

During the installation process a default user named "postgres" was created, which will be used for administering PostgreSQL databases.

Let's switch user to the PostgreSQL user and connect to PostgreSQL command line interface for managing your database.

# su - postgres
-bash-4.2$ psql
psql (9.4.5)
Type "help" for help.

You can get more help on using the PostgreSQL database by typing the help command, as shown in the image.

Connecting to PostgreSQL

Run the following command to update the default password of postgres user.

postgres=# \password postgres
Enter new password:*****
Enter it again:*****

Now we will create a new user and database using the PostgreSQL command line. To do so, let's run the below commands.

[root@centos-7 ~]# su - postgres
Last login: Sat Oct 10 19:26:10 BST 2015 on pts/1
-bash-4.2$ createuser kashif
-bash-4.2$ createdb testdb
-bash-4.2$ psql
postgres=# alter user kashif with encrypted password 'kashif123';
ALTER ROLE
postgres=# grant all privileges on database testdb to kashif;
GRANT

To list all the databases created on your system use the "\list" or "\l" command and to connect to a database use "\c db_name" as shown below.

Using Postgresql DB

Installing phpPgAdmin

In this section we are going to set up a web-based PostgreSQL administration tool. To do so, first we have to install its packages, which can be done by using the below yum command.

# yum install phpPgAdmin httpd

After running this command you will see a number of dependencies that are required for installing phpPgAdmin and the Apache web server. To proceed, press the "Y" key to accept the changes and complete the installation.

Dependencies Resolved
=======================================================================================
Package Arch Version Repository Size
=======================================================================================
Installing:
httpd x86_64 2.4.6-31.el7.centos.1 updates 2.7 M
phpPgAdmin noarch 5.1-2.rhel7 pgdg94 658 k
Installing for dependencies:
apr x86_64 1.4.8-3.el7 base 103 k
apr-util x86_64 1.5.2-6.el7 base 92 k
httpd-tools x86_64 2.4.6-31.el7.centos.1 updates 79 k
libzip x86_64 0.10.1-8.el7 base 48 k
mailcap noarch 2.1.41-2.el7 base 31 k
php x86_64 5.4.16-36.el7_1 updates 1.4 M
php-cli x86_64 5.4.16-36.el7_1 updates 2.7 M
php-common x86_64 5.4.16-36.el7_1 updates 563 k
php-pdo x86_64 5.4.16-36.el7_1 updates 97 k
php-pgsql x86_64 5.4.16-36.el7_1 updates 84 k

Transaction Summary
=======================================================================================
Install 2 Packages (+10 Dependent packages)

Total download size: 8.5 M
Installed size: 30 M
Is this ok [y/d/N]:y

phpPgAdmin Configuration

After installing the required packages, we will configure phpPgAdmin with the required parameters to allow access from remote locations, as by default it is only accessible through localhost.

# vim /etc/httpd/conf.d/phpPgAdmin.conf

phpPgAdmin Configuration

Now open the below configuration files using any editor and read them carefully before making any changes. Most of the parameters in these files are well explained and preconfigured, but we need to update some of the following parameters.

# vim /var/lib/pgsql/9.4/data/pg_hba.conf

Postgres MD Authentication conf
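As an illustration of the change made here — a hedged example, since the exact subnet depends on your network — a host line like the one below in pg_hba.conf allows md5 password authentication from a local subnet:

```
# TYPE  DATABASE        USER            ADDRESS                 METHOD
host    all             all             192.168.1.0/24          md5
```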

# vim /var/lib/pgsql/9.4/data/postgresql.conf

Postgresql Connection settings
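The relevant connection settings in postgresql.conf typically end up looking like this ('*' listens on all interfaces; restrict it to a single address if that is all you need):

```
listen_addresses = '*'          # default is 'localhost'
port = 5432
```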

# vim /etc/phpPgAdmin/config.inc.php

// Hostname or IP address for server. Use '' for UNIX domain socket.
// use 'localhost' for TCP/IP connection on this computer
$conf['servers'][0]['host'] = 'localhost';

// Database port on server (5432 is the PostgreSQL default)
$conf['servers'][0]['port'] = 5432;

$conf['owned_only'] = true;

Save the changes and then restart both the services of PostgreSQL and Apache.

# systemctl restart postgresql-9.4
# systemctl restart httpd

phpPgAdmin Web Console

Let's open the below URL to access the phpPgAdmin console as shown below.

http://your_servers_ip/phpPgAdmin/

phpPgAdmin Web Console

To login into the PostgreSQL simply click on the top left icon as shown and provide your credentials as created earlier.

PostgreSQL Web Login

Upon successful login, you will get access to create and manage your databases from phpPgAdmin console.

Using phpPgAdmin

Conclusion

At the end of this article you have learned about the installation and configuration of PostgreSQL with phpPgAdmin on CentOS 7. Still, this is only the first step into the world of PostgreSQL, as there are a lot of features to work with, such as point-in-time recovery, tablespaces, asynchronous replication, Multi-Version Concurrency Control (MVCC), and write-ahead logging for fault tolerance. We hope you find this article helpful for starting database administration with PostgreSQL.

The post How to Install and Configure PostgreSQL with phpPgAdmin on CentOs 7 appeared first on LinOxide.


How to Install Ghost with Nginx on FreeBSD 10.2


Node.js is an open-source runtime environment for developing server-side applications. Node.js applications are written in JavaScript and run on any server with the Node.js runtime. It is cross-platform, running on Linux, Windows, OS X, IBM AIX, and FreeBSD. Node.js was created by Ryan Dahl and other developers working at Joyent in 2009. It is designed to build scalable network applications.

Ghost is a blogging platform coded in Node.js. It is an open-source publishing platform that is beautifully designed, user-friendly, and free. It allows you to easily publish your content on the web or create your portfolio website.

In this tutorial we will install Ghost with Nginx as our web server on FreeBSD. We will install Node.js, npm, Nginx and SQLite3 on FreeBSD 10.2.

Step 1 - Install Node.js npm and Sqlite3

To run Ghost on your server, you must install Node.js. In this section we will install Node.js from the FreeBSD ports collection; go to the ports directory "/usr/ports/www/node" and install it by running "make install clean".

cd /usr/ports/www/node
make install clean

Once the Node.js installation is done, switch to the npm port directory and install it. npm is the package manager used to install, publish and manage Node programs.

cd /usr/ports/www/npm/
make install clean

Next, install SQLite3. By default Ghost uses SQLite3 as its database system, but MySQL/MariaDB and PostgreSQL are supported too. We will use SQLite3 as the default database.

cd /usr/ports/databases/sqlite3/
make install clean

Once everything is installed, check the versions of node.js, npm and sqlite3 :

node --version
v0.12.6

npm --version
2.11.3

sqlite3 --version
3.8.10.2

node and npm version

Step 2 - Add Ghost User

We will install and run Ghost under a normal user called "ghost". Create the new user with the "adduser" command :

adduser ghost
FILL With Your INFO

Add user Ghost

Step 3 - Installing Ghost

We will install Ghost under the "/var/www/" directory, so let's create it and then go to it :

mkdir -p /var/www/
cd /var/www/

Download ghost latest version with wget command :

wget --no-check-certificate https://ghost.org/zip/ghost-latest.zip

Extract it to the directory called "ghost" :

unzip -d ghost ghost-latest.zip

Next, change the owner to the user "ghost"; we will install and run it under that user.

chown -R ghost:ghost ghost/

When that is done, switch to the "ghost" user by typing the command below :

su - ghost

Then go to the installation directory "/var/www/ghost/" :

cd /var/www/ghost/

Before installing Ghost, we need the sqlite3 module for Node.js; install it with the npm command :

setenv CXX c++ ; npm install sqlite3 --sqlite=/usr/local

Note : Run as "ghost" user, not root user.

Now we are ready to install Ghost with the npm command :

npm install --production

Next, copy the configuration file "config.example.js" to "config.js", then edit it with nano editor :

cp config.example.js config.js
nano -c config.js

In the server block around line 25, change the host value :

host: '0.0.0.0',

Save and exit.
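For context, after the edit the server block of config.js should look roughly like the snippet below (2368 is Ghost's default port; '0.0.0.0' makes it listen on all interfaces):

```
server: {
    host: '0.0.0.0',
    port: '2368'
}
```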

Now run ghost with command below :

npm start --production

Then test by visiting the server IP on port 2368.

Ghost Installed

Ghost is installed in the directory "/var/www/ghost", under the user "ghost".

Step 4 - Run Ghost as FreeBSD Services

To run an application as a service on FreeBSD, you need to add a script to the rc.d directory. We will create a new service script for Ghost in the directory "/usr/local/etc/rc.d/".

Before we create the service script, we need to install a Node.js module for running Ghost as a service. Install the forever module with npm as root :

npm install forever -g

Now go to the rc.d directory and create a new file called ghost there :

cd /usr/local/etc/rc.d/
nano -c ghost

Paste service script below :

#!/bin/sh

# PROVIDE: ghost
# KEYWORD: shutdown
PATH="/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin"

. /etc/rc.subr

name="ghost"
rcvar="ghost_enable"
extra_commands="status"

load_rc_config ghost
: ${ghost_enable:="NO"}

status_cmd="ghost_status"
start_cmd="ghost_start"
stop_cmd="ghost_stop"
restart_cmd="ghost_restart"

ghost="/var/www/ghost"
log="/var/log/ghost/ghost.log"
ghost_start() {
sudo -u ghost sh -c "cd $ghost && NODE_ENV=production forever start -al $log index.js"
}

ghost_stop() {
sudo -u ghost sh -c "cd $ghost && NODE_ENV=production forever stop index.js"
}

ghost_status() {
sudo -u ghost sh -c "NODE_ENV=production forever list"
}

ghost_restart() {
ghost_stop;
ghost_start;
}

run_rc_command "$1"

Save and exit.

Next, make the ghost service script executable :

chmod +x ghost

and create a new directory and file for the Ghost log (the path used by the script above), then change the owner to the ghost user :

mkdir -p /var/log/ghost/
touch /var/log/ghost/ghost.log
chown -R ghost:ghost /var/log/ghost/

Finally, to run the Ghost service at boot, add it to the startup applications with the sysrc command :

sysrc ghost_enable=yes

and start ghost with :

service ghost start

Other commands :

service ghost stop
service ghost status
service ghost restart

Ghost service command

Step 5 - Install and Configure Nginx for Ghost

By default, Ghost runs standalone; you can run it without an Nginx, Apache or IIS web server. But in this tutorial we will install and configure Nginx to work with Ghost.

Install Nginx from the FreeBSD repository with the pkg command :

pkg install nginx

Next, go to nginx configuration directory and make new directory for virtualhost configuration.

cd /usr/local/etc/nginx/
mkdir virtualhost/

Go to the virtualhost directory and make a new file called ghost.conf with the nano editor :

cd virtualhost/
nano -c ghost.conf

Paste virtualhost configuration below :

server {
listen 80;

#Your Domain
server_name ghost.me;

location ~* \.(?:ico|css|js|gif|jpe?g|png|ttf|woff)$ {
access_log off;
expires 30d;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
proxy_pass http://127.0.0.1:2368;
}

location / {
add_header X-XSS-Protection "1; mode=block";
add_header Cache-Control "public, max-age=0";
add_header Content-Security-Policy "script-src 'self' ; font-src 'self' ; connect-src 'self' ; block-all-mixed-content; reflected-xss block; referrer no-referrer";
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options DENY;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:2368;
}

location = /robots.txt { access_log off; log_not_found off; }
location = /favicon.ico { access_log off; log_not_found off; }

location ~ /\.ht {
deny all;
}

}

Save and exit.

To activate the virtualhost configuration, you need to include that file in nginx.conf. Go to the Nginx configuration directory and edit the nginx.conf file :

cd /usr/local/etc/nginx/
nano -c nginx.conf

Before the last line, include the virtualhost configuration directory :

[......]

include virtualhost/*.conf;

}

Save and exit.

Test the Nginx configuration with the command "nginx -t". If there is no error, add Nginx to the startup with the sysrc command :

sysrc nginx_enable=yes

and start nginx :

service nginx start

Now test the Nginx and virtualhost configuration: open your browser and go to ghost.me (the domain set in server_name).

ghost.me successfully

Ghost.me is running successfully.

If you want to check the nginx server, use the "curl" command.
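A minimal curl check might look like this (assuming ghost.me resolves to this server, for example via an /etc/hosts entry); a 200 response indicates nginx is proxying to Ghost:

```
curl -I http://ghost.me/
```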

ghost and nginx test

Ghost is running with nginx.

Conclusion

Node.js is a runtime environment created by Ryan Dahl for developing scalable server-side applications. Ghost is an open-source blogging platform coded in Node.js; it comes beautifully designed and is easy for everyone to use. By default, Ghost is a web application that can stand on its own, not requiring an Apache, Nginx or IIS web server, but we can also integrate it with a web server (in this tutorial, Nginx). SQLite is the default database used by Ghost, but MySQL/MariaDB and PostgreSQL are supported too. Ghost is fast and easy to use and configure.

The post How to Install Ghost with Nginx on FreeBSD 10.2 appeared first on LinOxide.

How to Install Cockpit on Linux CentOS 7


Cockpit is an easy-to-use server administration tool for Linux-based systems. It is free software released under LGPL v2.1. Its purpose is to manage multiple servers in a user-friendly manner. Unlike other tools, it does not go deep into server configuration but tries to simplify server administration, especially for beginners. Cockpit is useful for simple tasks like starting and stopping different services, administering storage, journal inspection, etc. It makes use of systemd underneath.

Installing Cockpit

I have used a CentOS 7 system in this article and as Cockpit is not available in the CentOS repository, it needs to be cloned from the sig-atomic-buildscripts repository.

[root@ceph-storage ~]# git clone https://github.com/baude/sig-atomic-buildscripts
Cloning into 'sig-atomic-buildscripts'...
remote: Counting objects: 95, done.
remote: Total 95 (delta 0), reused 0 (delta 0), pack-reused 95
Unpacking objects: 100% (95/95), done.

Now you can install it using yum:

yum install cockpit

If you are using Ubuntu, execute the following commands:

sudo add-apt-repository ppa:jpsutton/cockpit

sudo apt-get update

sudo apt-get install cockpit

Enable cockpit service

[root@ceph-storage ~]# systemctl enable cockpit.socket
ln -s '/usr/lib/systemd/system/cockpit.socket' '/etc/systemd/system/sockets.target.wants/cockpit.socket'

If firewall is enabled in your system, you need to add Cockpit to the list of trusted services and restart firewall.

[root@ceph-storage ~]# firewall-cmd --permanent --zone=public --add-service=cockpit

[root@ceph-storage ~]# firewall-cmd --reload

Start the service

[root@ceph-storage ~]# systemctl start cockpit.socket

If you are on CentOS, you will need another step before you start using Cockpit. We need to modify the cockpit service file to disable SSL as there seems to be some issue with this. For this, edit the file /usr/lib/systemd/system/cockpit.service and change the line starting with ExecStart to the following:

ExecStart=/usr/libexec/cockpit-ws --no-tls

Please note that this workaround may not be recommended in a production environment. After this, reload systemd and restart cockpit.

[root@ceph-storage ~]# systemctl daemon-reload

[root@ceph-storage ~]# systemctl restart cockpit

Now you are ready to use the Cockpit GUI.

Web Interface

The Cockpit web interface can be accessed by using the server's IP address with port 9090:

https://server-ip:9090

Login screen for Cockpit

You can login as root and start administering the servers. Once logged in, you will notice the below screen which displays an overview of CPU, Memory, Network Traffic and Disk I/O usage.

System output

Moving to the Services section, you have different tabs here namely Targets, System Services, Sockets, Timers and Paths.  They show the different system services, whether they are enabled, disabled, active, inactive etc.

system-services

Socket services

The Containers section shows whether Docker is installed and activated. If not, you can install and activate it from here.

Container dashboard

The Journal, Networking and Storage sections display the different logs, network usage and storage usage details respectively.

System logs  Network usageStorage details

Under the Tools section, we have Administrator Accounts using which we can either create new accounts or switch between different accounts.

Tools also provides a working console for the administrators.

Tools

Conclusion

Cockpit provides a pretty neat and simple user interface for new admins to manage Linux servers. But remember that it is accessible only via the web. You can visit its official page for more details. As it is relatively new, it might take some time before it gets widespread support.

The post How to Install Cockpit on Linux CentOS 7 appeared first on LinOxide.

How to Install Redis Server on CentOS 7


Hi everyone! Today Redis is the subject of our article: we are going to install it on CentOS 7 by building the source files, installing the binaries, and creating the configuration and service files. After installing its components, we will set its configuration, as well as some operating system parameters, to make it more reliable and faster.

Running Redis

Redis server

Redis is an open-source, multi-platform data store written in ANSI C that serves datasets directly from memory, achieving extremely high performance. It supports various programming languages, including Lua, C, Java, Python, Perl, PHP and many others. It is based on simplicity, with about 30k lines of code that do "few" things, but do them well. Although it works in memory, data can be persisted, and it has fairly reasonable support for high availability and clustering, which helps keep your data safe.

Building Redis

There is no official RPM package available, so we need to build it from source; for that you will need to install Make and GCC.

Install the GNU Compiler Collection and Make with yum if they are not already installed

yum install gcc make

Download the tarball from redis download page.

curl http://download.redis.io/releases/redis-3.0.4.tar.gz -o redis-3.0.4.tar.gz

Extract the tarball contents

tar zxvf redis-3.0.4.tar.gz

Enter the Redis directory we have extracted

cd redis-3.0.4

Use Make to build the source files

make

Install

Enter on the src directory

cd src

Copy Redis server and client to /usr/local/bin

cp redis-server redis-cli /usr/local/bin

It's good to copy the sentinel, benchmark and check binaries as well.

cp redis-sentinel redis-benchmark redis-check-aof redis-check-dump /usr/local/bin

Make Redis config directory

mkdir /etc/redis

Create a working and data directory under /var/lib/redis

mkdir -p /var/lib/redis/6379

System parameters

For Redis to work correctly you need to set some kernel options.

Set vm.overcommit_memory to 1, which means always; this will avoid data being truncated. Take a look here for more.

sysctl -w vm.overcommit_memory=1

Change the maximum number of backlog connections to some value higher than the tcp-backlog option in redis.conf, which defaults to 511. You can find more on sysctl-based IP networking tuning on the kernel.org website.

sysctl -w net.core.somaxconn=512

Disable transparent huge pages support, which is known to cause latency and memory access issues with Redis.

echo never > /sys/kernel/mm/transparent_hugepage/enabled

redis.conf

redis.conf is the Redis configuration file; however, here the file is named 6379.conf, where the number matches the network port it listens on. This naming is recommended if you are going to run more than one Redis instance.

Copy sample redis.conf to /etc/redis/6379.conf.

cp redis.conf /etc/redis/6379.conf

Now edit the file and set some of its parameters.

vi /etc/redis/6379.conf

daemonize

Set daemonize to no; systemd needs the process in the foreground, otherwise Redis will suddenly die.

daemonize no

pidfile

Set the pidfile to redis_6379.pid under /var/run.

pidfile /var/run/redis_6379.pid

port

Change the network port if you are not going to use the default

port 6379

loglevel

Set your loglevel.

loglevel notice

logfile

Set the logfile to /var/log/redis_6379.log

logfile /var/log/redis_6379.log

dir

Set the directory to /var/lib/redis/6379

dir /var/lib/redis/6379

Security

Here are some actions you can take to improve security.

Unix sockets

In many cases, the client application resides on the same machine as the server, so there is no need to listen on network sockets. If this is the case you may want to use Unix sockets instead; for this you need to set the port option to 0 and then enable Unix sockets with the following options.

Set the path to the socket file

 unixsocket /tmp/redis.sock

Set restricted permission to the socket file

unixsocketperm 700

Now, to have access with redis-cli you should use the -s flag pointing to the socket file

redis-cli -s /tmp/redis.sock

requirepass

You may need remote access; if so, you should set a password, which will be required before any operation.

requirepass "bTFBx1NYYWRMTUEyNHhsCg"

rename-command

Imagine the output of the next command: it will dump the configuration of the server, so you should deny access to this kind of information whenever possible.

CONFIG GET *

You can restrict, or even disable, this and other commands by using rename-command. You must provide a command name and a replacement. To disable a command completely, set the replacement string to "" (blank); renaming to an unguessable string is also more secure, as it prevents someone from guessing the command name.

rename-command FLUSHDB "FLUSHDB_MY_SALT_G0ES_HERE09u09u"
rename-command FLUSHALL ""
rename-command CONFIG "CONFIG_MY_S4LT_GO3S_HERE09u09u"
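The salted names above should be random rather than typed from memory. A small sketch for generating such a suffix in the shell, using /dev/urandom as the random source:

```shell
# Build a random 24-character suffix and print a ready-to-paste directive
salt=$(head -c 1024 /dev/urandom | tr -dc 'a-z0-9' | cut -c1-24)
echo "rename-command CONFIG \"CONFIG_${salt}\""
```

Paste the printed line into 6379.conf and keep the generated name somewhere safe; you will need it to run CONFIG yourself.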

Access Redis through unix with password and command changes

Access through unix sockets with password and command changes

Snapshots

By default Redis will periodically dump its dataset to dump.rdb in the data directory we set. You can configure how often the rdb file is updated with the save directive; the first parameter is a time frame in seconds and the second is the number of changes performed on the dataset.

Every 15 minutes (900 seconds) if there was at least 1 key change

save 900 1

Every 5 minutes (300 seconds) if there were at least 10 key changes

save 300 10

Every minute (60 seconds) if there were at least 10000 key changes

save 60 10000

The /var/lib/redis/6379/dump.rdb file contains a dump of the dataset in memory since the last save. Since Redis creates a temporary file and then replaces the original, there is no risk of corruption and you can always copy it directly without fear.

Starting at boot

You may use systemd to add Redis to the system startup

Copy the sample init script to /etc/init.d; note the port number in the script name.

cp utils/redis_init_script /etc/init.d/redis_6379

We are going to use systemd, so create a unit file named redis_6379.service under /etc/systemd/system

vi /etc/systemd/system/redis_6379.service

Put in this content; see man systemd.service for details

[Unit]
Description=Redis on port 6379

[Service]
Type=forking
ExecStart=/etc/init.d/redis_6379 start
ExecStop=/etc/init.d/redis_6379 stop

[Install]
WantedBy=multi-user.target

Now add the memory overcommit and maximum backlog options we have set before to the /etc/sysctl.conf file.

vm.overcommit_memory = 1

net.core.somaxconn=512

For the transparent huge pages support there is no sysctl directive, so you can put the command at the end of /etc/rc.local

echo never > /sys/kernel/mm/transparent_hugepage/enabled

Conclusion

That's enough to start: with these settings you will be able to deploy a Redis server for many simpler scenarios; however, there are many options in redis.conf for more complex environments. In some cases, you may use replication and Sentinel to provide high availability, split the data across servers, or create a cluster of servers. Thanks for reading!

The post How to Install Redis Server on CentOS 7 appeared first on LinOxide.

How to Install CSF Firewall on CentOS 7


CSF (ConfigServer Security and Firewall) is one of the most useful open-source security applications for Linux operating systems, used as a packet-inspection firewall and for login and intrusion detection on Linux servers. Using CSF helps protect servers against many security attacks, such as brute-force attacks. It comes with a service called the Login Failure Daemon (LFD) that prevents unauthorized access to network daemons by watching user activity for excessive login failures, restricting access by IP address to help prevent network daemons from being compromised. Whenever a large number of failed attempts come from a specific IP, that IP is immediately, temporarily blocked from all services on the server.

ConfigServer Security & Firewall comes with a lot of features, providing SSH login notifications, excessive connection blocking, mod_security failure alerts, suspicious process reporting and many others.

1) Prerequisites

CSF can be installed on any Linux distribution, but in this tutorial we are going to install and configure it on CentOS 7.1.

Log in to your CentOS 7 server as the root user and make sure you are connected to the Internet, so you can update your system and install the required dependency packages for CSF.

After login, run the below command for system update.

# yum update

Then, to install the Perl modules required for setting up CSF on CentOS 7, run the below command.

# yum -y install perl perl-libwww-perl perl-LWP-Protocol-https perl-GDGraph wget unzip net-tools

2) Download CSF Installation Package

To download the ConfigServer Security & Firewall package, run the below command in the /usr/src/ directory as shown.

# wget https://download.configserver.com/csf.tgz

Download CSF Package

After downloading the archived package, run the following command to extract it within the same directory.

# tar -xzf csf.tgz

Now change to the extracted folder and use the ls command to view the configuration and installation scripts inside, as shown.

CSF Installation Package

3) Installing ConfigServer Security Firewall

To start installation of CSF on CentOS 7, we will run the installation script that is present within the same directory as shown above.

Let's run the below command as shown.

# sh install.sh

Starting CSF Installation

The installation script checks for the basic Perl modules and root access, then creates a number of directories and compiles different configuration files and libraries during the installation process, as shown below.

*** USE_CONNTRACK Enabled
*** IPV6 Enabled
*** IPV6_SPI set to 1

TCP ports currently listening for incoming connections:
22,5432

UDP ports currently listening for incoming connections:
5353,43539

Note: The port details above are for information only, csf hasn't been auto-configured.

Don't forget to:
1. Configure the following options in the csf configuration to suite your server: TCP_*, UDP_*
2. Restart csf and lfd
3. Set TESTING to 0 once you're happy with the firewall, lfd will not run until you do so

Adding current SSH session IP address to the csf whitelist in csf.allow:
Adding 172.xx.xx.xx to csf.allow only while in TESTING mode (not iptables ACCEPT)
*WARNING* TESTING mode is enabled - do not forget to disable it in the configuration
‘lfd.service’ -> ‘/usr/lib/systemd/system/lfd.service’
‘csf.service’ -> ‘/usr/lib/systemd/system/csf.service’
ln -s '/usr/lib/systemd/system/csf.service' '/etc/systemd/system/multi-user.target.wants/csf.service'
ln -s '/usr/lib/systemd/system/lfd.service' '/etc/systemd/system/multi-user.target.wants/lfd.service'
‘/etc/csf/csfwebmin.tgz’ -> ‘/usr/local/csf/csfwebmin.tgz’

Installation Completed

We can see that before the installation process completes, csf detects the ports already listening for connections, including the SSH port, and auto-whitelists the connected IP address where possible.

4) Testing CSF IPTable Modules

Once the installation process is complete, run the below command to test the status of required iptables modules.

# perl /usr/local/csf/bin/csftest.pl

CSF Test

5) CSF Configuration & Usage

To configure the CSF firewall on CentOS 7 and other Red Hat Enterprise Linux (RHEL) based distributions, the default configuration files can be found under "/etc/csf/".

The configuration directory includes the following files, as shown in the image.

CSF Configuration Files

To enable a fully functional CSF firewall, set the following parameter in the default csf configuration file, disabling TESTING mode.

[root@centos-7 csf]# vim csf.conf
TESTING = "0"
:wq!

Now we will specify an email address to report errors from the Login Failure Daemon by making the following configuration changes.

Configuring CSF

After making configuration changes we have to reload the csf service by using the below command so that the changes take effect.

# csf -r

If you want to check the status of csf service then run the below command.

# service csf status

CSF Service Status

Run the following command for a complete overview of all the command-line options that you can use with csf.

# csf --help

Using CSF
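A few commonly used options from that help output, shown here as a quick reference (run as root; the IP addresses are placeholders):

```
# csf -a 192.168.0.10    Allow an IP and add it to csf.allow
# csf -d 192.168.0.20    Deny an IP and add it to csf.deny
# csf -dr 192.168.0.20   Remove an IP from csf.deny
# csf -g 192.168.0.10    Search iptables for a rule matching the IP
```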

Conclusion

In this article we learned about the installation, configuration and usage of ConfigServer Security and Firewall, one of the most widely used open-source tools freely available for Linux platforms. Using this tool we can secure our servers from many threats with its simple configuration and commands. Its installation is very simple and it is easy to use, which is why many organizations prefer it. We can also manage it from a graphical user interface, accessible after installing Webmin with the available CSF plug-in.

The post How to Install CSF Firewall on CentOS 7 appeared first on LinOxide.

How to Install Pure-FTPd with TLS on FreeBSD 10.2


FTP, or File Transfer Protocol, is a standard application-layer network protocol used to transfer files between a client and a server after the user logs in to the FTP server over a TCP network such as the internet. FTP has been around for a long time, much longer than P2P programs or the World Wide Web, and to this day it remains a primary method for sharing files over the internet; it is still very popular. FTP can provide secure transmission that protects the username and password and encrypts the content with SSL/TLS.

Pure-FTPd is a free FTP server with a strong focus on software security. It is a great choice if you want to provide fast, secure, lightweight and feature-rich FTP services. Pure-FTPd can be installed on a variety of Unix-like operating systems, including Linux and FreeBSD. Pure-FTPd was created by Frank Denis in 2001, based on Troll-FTPd, and is still actively developed by a team led by Denis.

In this tutorial we will cover the installation and configuration of Pure-FTPd on the Unix-like operating system FreeBSD 10.2.

Step 1 - Update system

The first thing you must do is update the FreeBSD system; connect to your server with SSH and then type the commands below as root :

freebsd-update fetch
freebsd-update install

Step 2 - Install Pure-FTPd

You can install Pure-FTPd from ports, but in this tutorial we will install it from the FreeBSD repository with the "pkg" command. So, let's install it :

pkg install pure-ftpd

Once the installation is finished, enable pure-ftpd at boot time with the sysrc command below :

sysrc pureftpd_enable=yes

Step 3 - Configure Pure-FTPd

The configuration file for Pure-FTPd is located in the directory "/usr/local/etc/"; go to the directory and copy the sample configuration to "pure-ftpd.conf".

cd /usr/local/etc/
cp pure-ftpd.conf.sample pure-ftpd.conf

Now edit the configuration file with the nano editor :

nano -c pure-ftpd.conf

Note: the -c option shows line numbers in nano.

Go to line 59 and change the value of "VerboseLog" to "yes". This option allows you, as the administrator, to see a log of all commands used by the users.

VerboseLog   yes

Now look at line 126, "PureDB", for the virtual-users configuration. Virtual users are a simple mechanism to store a list of users, with their password, name, uid, directory, etc. It is just like /etc/passwd, but it is a different file used only for FTP. In this tutorial we will store the list of users in the files "/usr/local/etc/pureftpd.passwd" and "/usr/local/etc/pureftpd.pdb". Please uncomment that line and change the path to "/usr/local/etc/pureftpd.pdb".

PureDB   /usr/local/etc/pureftpd.pdb

Next, uncomment line 336, "CreateHomeDir". This option makes it easy to add virtual users by automatically creating their home directories if they are missing.

CreateHomeDir   yes

Save and exit.

Next, start pure-ftpd with the service command:

service pure-ftpd start

Step 4 - Adding New Users

At this point the FTP server starts without errors, but you cannot log in yet, because the default configuration of pure-ftpd disables anonymous users. We need to create new users with a home directory, and then give them a password for login.

One thing you must do before adding a new pure-ftpd virtual user is to create a system user for it. Let's create a new system user "vftp", with the default group the same as the username and home directory "/home/vftp/":

pw useradd vftp -s /sbin/nologin -w no -d /home/vftp \
-c "Virtual User Pure-FTPd" -m

Now you can add a new user for the FTP server with the "pure-pw" command. As an example, we will create a new user named "akari":

pure-pw useradd akari -u vftp -g vftp -d /home/vftp/akari
Password: TYPE YOUR PASSWORD

That command creates the user "akari" and stores the data in the file "/usr/local/etc/pureftpd.passwd", not in /etc/passwd, which means you can easily create FTP-only accounts without messing up your system accounts.

Next, you must generate the PureDB user database with this command:

pure-pw mkdb
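To double-check what was stored, pure-pw can print the settings of a single virtual user, here the user "akari" created above:

```shell
# Display the stored entry for one virtual user, read from pureftpd.passwd
pure-pw show akari
```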

Now restart the pure-ftpd service and try to connect as the user "akari":

service pure-ftpd restart

Try connecting with user akari:

ftp SERVERIP

(Screenshot: successful FTP login as user "akari".)

NOTE :

If you want to add another user, use the "pure-pw" command again. And if you want to delete an existing user, use:

pure-pw userdel useryouwanttodelete
pure-pw mkdb
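As a side note, most pure-pw subcommands accept the -m flag to rebuild the PureDB automatically, so the separate "pure-pw mkdb" step can be skipped. A sketch using the tutorial's example user:

```shell
# Change the password of user "akari" and update pureftpd.pdb in one step
pure-pw passwd akari -m

# The same flag works when deleting a user
pure-pw userdel akari -m
```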

Step 5 - Add SSL/TLS to Pure-FTPd

Pure-FTPd supports encryption using TLS. To enable TLS/SSL support, make sure the OpenSSL library is already installed on your FreeBSD system.

Now you must generate a new self-signed certificate in the directory "/etc/ssl/private". Before you generate the certificate, create that directory:

cd /etc/ssl/
mkdir private
cd private/

Now generate the self-signed certificate with the openssl command below. Note that without the -days option the certificate would only be valid for 30 days, so we set it to one year:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -sha256 \
-keyout /etc/ssl/private/pure-ftpd.pem \
-out /etc/ssl/private/pure-ftpd.pem

Fill in the prompts with your own information.

(Screenshot: certificate generation prompts.)

Next, change the certificate permissions:

chmod 600 /etc/ssl/private/*.pem
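If you want to sanity-check a generated PEM before pointing Pure-FTPd at it, openssl x509 can print its subject and validity window. The sketch below is self-contained: it uses a throwaway file in /tmp and the hypothetical hostname "ftp.example.com", so it does not touch the real certificate:

```shell
# Create a throwaway self-signed certificate non-interactively (valid 365 days)
openssl req -x509 -nodes -newkey rsa:2048 -sha256 -days 365 \
    -subj "/CN=ftp.example.com" \
    -keyout /tmp/demo-ftpd.pem -out /tmp/demo-ftpd.pem

# Print the subject and the notBefore/notAfter dates of the certificate
openssl x509 -in /tmp/demo-ftpd.pem -noout -subject -dates
```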

Once the certificate is generated, edit the Pure-FTPd configuration file:

nano -c /usr/local/etc/pure-ftpd.conf

Uncomment line 423 to enable TLS:

TLS     1

And uncomment line 439 to set the certificate file path:

CertFile       /etc/ssl/private/pure-ftpd.pem

Save and exit, then restart the pure-ftpd service:

service pure-ftpd restart

Now let's test that Pure-FTPd works with TLS/SSL. Here I use "FileZilla" to connect to the FTP server with the user "akari" that we created earlier.

(Screenshot: FileZilla connected to Pure-FTPd with TLS support.)

Pure-FTPd with TLS on FreeBSD 10.2 has been set up successfully.
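If you prefer to verify TLS from the command line instead of FileZilla, openssl s_client can perform the FTP STARTTLS handshake. Replace SERVERIP with your server's address; this is just a connectivity sketch against a live server:

```shell
# Issue AUTH TLS on port 21 and print the certificate the server presents
openssl s_client -connect SERVERIP:21 -starttls ftp
```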

Conclusion

FTP, or File Transfer Protocol, is a standard protocol used to transfer files between users and a server. One of the best lightweight and secure FTP server programs is Pure-FTPd. It is secure and supports the TLS/SSL encryption mechanism. Pure-FTPd is easy to install and configure, and its virtual-user support makes it easy for a sysadmin to manage accounts on an FTP server with many users.

The post How to Install Pure-FTPd with TLS on FreeBSD 10.2 appeared first on LinOxide.
