Sunday, May 17, 2020

Elasticsearch backup - Import and Export

If you need to migrate Elasticsearch to another server along with its data, this can be a challenge if you have not done it before.

Before reaching this point, you have probably done various things, including installing Elasticsearch on the new server. Now you want to import all existing data from the old server to the new one. Check the Elasticsearch version on the old server: have you installed a near-identical version on the new server?

Suppose you had a 7.x version on your old server; any 7.x version will work on the new server too. The Elasticsearch documentation says you can migrate 6.x data to 7.x, but I have not tried that myself.

I had version 7.6 on the old server. I added the 7.x repository to apt/sources.list on the new server, got 7.7 there, and the migration worked pretty well.

There is a method approved by Elasticsearch itself to migrate data, based on creating snapshots and restoring them. I have not used it in this tutorial, as several tutorials for it are already available. I have used a different method to achieve the same task.

Steps :
1) First, list all the indices on the old server:
curl -L localhost:9200/_cat/indices
If your Elasticsearch server has HTTP authentication enabled, you need to pass the credentials to curl.
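With authentication enabled, curl's -u flag carries the credentials ("elastic" and "changeme" below are placeholders for your own user and password):

```shell
# List indices on a secured cluster; replace the placeholder credentials
curl -L -u elastic:changeme localhost:9200/_cat/indices
```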

This is my output

green    open    .apm-custom-link        NQlqIQaCwO_S8jWYSZA    1 0    0 0     208b     208b
yellow    open    kibana_sample_data_ecommerce    eMXnWTJylltrthTGqgw    1 1  4675 0    4.3mb    4.3mb
green    open    .kibana_task_manager_1        g7hUKicRTHufkJ65Hlg    1 0     5 3   33.6kb   33.6kb
yellow    open    conzexant_index5        HXQXmMBaBgcUxzrWORw    1 1     1 0    3.5kb    3.5kb
yellow    open    conzexant_other_entities    ZseVzSv6RaCxAMwn48w    1 1     1 0   99.1kb   99.1kb
green    open    .apm-agent-configuration    rcZAEX0NSDVbYJtvi8g    1 0     0 0     208b     208b
yellow    open    conzexant_index1        Ne_S7UQSQuyyXwFgb_Q    1 1 13916 0   28.2mb   28.2mb
yellow    open    conzexant_index2        leoZmPUqQoekP9DhIYQ    1 1 14355 0   68.3mb   68.3mb
green    open    .kibana_1            _U1SYeCKT5aGXgTldvA    1 0   123 4 1005.7kb 1005.7kb


The indices I need to copy are kibana_sample_data_ecommerce, conzexant_index5, conzexant_index1, conzexant_index2 and conzexant_other_entities.

In the curl output, the third column is the index name. If you run the same command on the new server, you will get its default list of indices. By comparing the index names with the old server, you can easily work out which indices you need on the new server.

2) Install elasticdump. First verify access: can you reach the new server's Elasticsearch from the old server, or the old server's from the new one, using curl? If both directions work, you can install elasticdump on either server; if only one direction works, install it on the server that has access.
If neither server can reach the other, there is method B: export to JSON and import, which we will cover shortly.

Suppose you can access the new server's Elasticsearch from the old server; then log into the old server and install elasticdump there.
Install a Node.js version greater than 8; npm will be installed with it.
Now install elasticdump:
npm install elasticdump
If you run this command in your home directory, elasticdump is installed inside the node_modules folder there.

3) Now export and import each index to the new server using the following commands.
a) First, dump the analyzer to the new server:
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/index_name --output=http://newserverIP:9200/index_name --type=analyzer
Here index_name is the name of the index you want to migrate to the new server.
In my case the command was:
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/conzexant_index5 --output=http://newserverIP:9200/conzexant_index5 --type=analyzer
The new server's IP was 35.34.xxx.xxx.

b) Then dump the mapping to the new server:
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/index_name --output=http://newserverIP:9200/index_name --type=mapping
In my case the command was:
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/conzexant_index5 --output=http://newserverIP:9200/conzexant_index5 --type=mapping
c) And finally, dump the data:
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/index_name --output=http://newserverIP:9200/index_name --type=data
I ran the same data command for each index name.

I have assumed here that there is no authentication on Elasticsearch (which is very bad); if authentication is enabled, you need to pass the credentials with the command.
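With basic auth enabled, elasticdump accepts the credentials embedded in the URL ("user" and "password" below are placeholders):

```shell
# Dump data between two secured clusters; replace the placeholder credentials
~/node_modules/elasticdump/bin/elasticdump \
  --input=http://user:password@localhost:9200/index_name \
  --output=http://user:password@newserverIP:9200/index_name \
  --type=data
```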

4) Once it is done, you can verify the data on the new server. Log into the new server and run:
curl -L localhost:9200/_cat/indices
The imported index will be listed on the new server with all its data. Repeat the process for the rest of the indices.
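As a quick sanity check, you can also compare document counts between the two servers with the _count API (the index name here is from my example):

```shell
# Run on each server and compare the "count" values in the JSON response
curl -L localhost:9200/conzexant_index5/_count
```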

Method B :
If the old and new servers cannot reach each other, you need to export the data to JSON on the old server and then import it on the new server.
In this method, you need to install elasticdump on both servers.

Log into the old server
Export :
Export Analyzer :
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/index_name --output=index_name-analyzer.json --type=analyzer
Export Mapping :
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/index_name --output=index_name-mapping.json --type=mapping
Export Data :
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/index_name --output=index_name-data.json --type=data
Copy all three JSON files to the new server and log into the new server using ssh.
cd into the directory where the JSON files are stored, then import them one by one.
Import :
Import Analyzer :
~/node_modules/elasticdump/bin/elasticdump --input=index_name-analyzer.json --output=http://localhost:9200/index_name --type=analyzer
Import Mapping :
~/node_modules/elasticdump/bin/elasticdump --input=index_name-mapping.json --output=http://localhost:9200/index_name --type=mapping
Import Data :
~/node_modules/elasticdump/bin/elasticdump --input=index_name-data.json --output=http://localhost:9200/index_name --type=data
This is the process to export and import the data of one index; you can do the same for all your indices.

You can write a shell script to export all indices from the old server and import them all into the new server.
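Such a script might look like this (a sketch, not a tested tool: it exports every non-system index from a local Elasticsearch to JSON files, assuming elasticdump is installed in the home directory and no authentication is enabled):

```shell
#!/bin/bash
# Export analyzer, mapping and data of every non-system index to JSON files
ES="http://localhost:9200"
DUMP="$HOME/node_modules/elasticdump/bin/elasticdump"

# Third column of _cat/indices is the index name; skip ".kibana_1" and friends
for index in $(curl -sL "$ES/_cat/indices" | awk '{print $3}' | grep -v '^\.'); do
    for type in analyzer mapping data; do
        "$DUMP" --input="$ES/$index" --output="$index-$type.json" --type="$type"
    done
done
```

The matching import script on the new server would loop over the JSON files and swap --input and --output.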

Note : Elasticsearch also offers a snapshot-and-restore method to migrate data. The method above is just another way to get the same result.

Docker error : debconf: Can't locate Term/ReadLine.pm in @INC; debconf: delaying package configuration, since apt-utils is not installed

debconf: Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module)
debconf: delaying package configuration, since apt-utils is not installed

Solution :

Install apt-utils in the container:
sudo apt-get install apt-utils
and run this command:
echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
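If the error shows up while building an image, the same fix can be baked into the Dockerfile itself (a sketch; adjust it to your base image):

```dockerfile
# Keep debconf non-interactive during the build and install apt-utils
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
    && apt-get install -y --no-install-recommends apt-utils \
    && echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
```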


apt repository error

W: GPG error: http://ppa.launchpad.net/ondrej/php/ubuntu xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY

If you get this error while installing an Ubuntu package, you must have added a repository to apt earlier, but apt cannot verify its signature because the public key is not available. Here is the solution.

There will be a key after the text 'NO_PUBKEY' in the error message; copy that key and run the command

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <PUBKEY>

Use the key from your error message in place of <PUBKEY>.

Your command should look like this:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 4F4EA0AAE5267A6C

Now try to install the package again; you should not see the error any more.


Saturday, May 16, 2020

Codeigniter 404 error - Application is migrated from windows to linux

If you have migrated your CodeIgniter project from Windows to Linux and it gives a 404 error on the Linux system even though the same project worked fine on Windows, there may be a case-sensitivity issue in your filenames, because Linux treats code.php and Code.php in the same folder as different files.

Solution :
Capitalize the first letter of every filename in both the models and controllers folders.
Folder names should start with a lowercase letter, but the filenames inside those folders should start with a capital letter.

models :
./Admin.php
./Employee.php
./Emails.php
./Contractor.php
./Index.html
./User.php
./users
        ./users/User.php
        ./users/Email.php
./employees
        ./employees/User.php
        ./employees/Email.php

controllers:
./Admincontroller.php
./Employeecontroller.php
./Mailcontroller.php
./Index.html
./Users.php
./users
        ./users/Usercontroller.php
        ./users/Emailcontroller.php
./employer
        ./employer/Employercontroller.php
        ./employer/Employer.php

Run npm start in the background

If nohup gives an error when starting npm, here is another way to run npm in the background.

First install forever globally.
npm install -g forever
cd into the project
cd /project/path/
Run the forever command:
forever start -c "npm start" ./
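Once it is running, forever's other standard subcommands let you inspect and stop the process:

```shell
forever list       # show the processes forever is managing
forever stop 0     # stop a process by the index shown in `forever list`
forever stopall    # stop everything forever started
```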

ssh - Disable password login and allow login only using key file

If you allow password login on your ssh server, a brute-force attack can be launched against it. Key-file login is more secure and reduces the chance of the server being compromised. Best practice is to disable password login on your server.

Here are the steps to disable password login.

Update the line below in the file /etc/ssh/sshd_config. It may be commented out; uncomment it and set the value to no:
PasswordAuthentication no
Then restart the ssh service.