Saturday, December 19, 2020

Google Chrome GPU process Error

If you are facing a GPU process error in Google Chrome after an upgrade, or the error starts appearing suddenly, this solution should work for you.
To see the error, you need to start Chrome from a terminal; only then will you be able to see it.


Launched the regular way, Chrome does not open: you click the icon and nothing happens, which is why you cannot see the error.


In some cases, Chrome starts with a half black, half white screen and you cannot type anything because the cursor does not appear.

If you are the root user, you need to start Chrome with the following command:

google-chrome --disable-gpu --disable-software-rasterizer --no-sandbox

You can also start it from the icon, but since that is not working you need to update a line in the file /opt/google/chrome/google-chrome.
Find the last line in the file. It should look like this:

exec -a "$0" "$HERE/chrome" "$@"

Replace the line with:

exec -a "$0" "$HERE/chrome" "$@" --disable-gpu --disable-software-rasterizer --no-sandbox

Now click on the Chrome icon; it should open properly.

Mysql import table error : Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535.

If you are facing this error while importing data into MySQL, you need to change the engine type of the MySQL table.

Error : Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535.

 Solution :

ALTER TABLE sheet4_5 ENGINE=MYISAM;
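
If you want to confirm which engine and row format the table currently uses before running the ALTER, a quick check from the mysql prompt (using the table name sheet4_5 from the statement above) is:

SHOW TABLE STATUS LIKE 'sheet4_5'\G

The Engine and Row_format fields in the output tell you what the table is using now; after the ALTER they should show MyISAM.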

Django Migration Error - Error : duplicate key value violates unique constraint "auth_permission_pkey" DETAIL: Key (id)=(xxx) already exists.

Python Django Error :

Error : duplicate key value violates unique constraint "auth_permission_pkey" DETAIL: Key (id)=(xxx) already exists.

Solution :

python manage.py sqlsequencereset auth | python manage.py dbshell
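
The sqlsequencereset command only prints SQL; piping it into dbshell is what actually executes it. If you want to review the statements before running them (a cautious two-step variant of the same fix, with reset_auth.sql as a throwaway filename), you can do:

python manage.py sqlsequencereset auth > reset_auth.sql
cat reset_auth.sql
python manage.py dbshell < reset_auth.sql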

Python Error while creating virtualenv 'ImportError: No module named importlib_metadata'

Python Errors while creating virtualenv

ImportError: No module named importlib_metadata

or

AssertionError: importlib-metadata>=0.12; python_version < "3.8" .dist-info directory not found

or

NameError: name 'ModuleNotFoundError' is not defined

Solution :

You need to upgrade pip.

pip install --upgrade --user pip

Once pip is upgraded, you should not get the error. If you are using pip3, use pip3 instead of pip in the command.
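
After upgrading, it is worth confirming that the upgraded pip is the one actually being picked up; the --user install goes to ~/.local/bin, so that directory must appear on your PATH before the system pip. A quick check:

pip --version
which pip
virtualenv myenv

If the version shown is still the old one, add ~/.local/bin to the front of your PATH and try again.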


Friday, November 6, 2020

Forward Authorization header from nginx to apache

You may have both Apache and Nginx configured on your server: some PHP sites are served by Apache while Python/Node/Ruby or other sites are served by Nginx.

If your primary web server is Nginx and you have proxy-passed Apache behind it, a missing Authorization header is a common issue you might face.

The 'Undefined index: Authorization' error appears when Apache expects an Authorization token that is never passed to it. Since your primary web server (port 80) is Nginx and Apache is proxy-passed behind it, all requests are received by Nginx first and then forwarded to Apache.

The Authorization header behaves differently: Nginx receives it but does not forward it to Apache by default, as if the header were meant only for itself. You need to tell Nginx to pass it on. How do you do that?

In the Nginx virtual host that proxy-passes to Apache, add the following lines:

proxy_pass_request_headers      on;
proxy_set_header Authorization $http_authorization;
proxy_pass_header  Authorization;

Here 'Authorization' is the name of the header that carries the token which should be passed to Apache.

Now Nginx will pass the Authorization header token to Apache and you will not get this error again.
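
For context, a minimal sketch of such a virtual host is shown below; the backend address 127.0.0.1:8080 is an assumption, adjust it to whatever port your Apache actually listens on.

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # Apache backend (assumed port)
        proxy_set_header Host $http_host;
        proxy_pass_request_headers      on;
        proxy_set_header Authorization $http_authorization;
        proxy_pass_header  Authorization;
    }
}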

Install mssql server in Ubuntu 20.04 Docker Container

To install Microsoft SQL Server 2019 in an Ubuntu 20.04 Docker container, follow these steps.

1. Launch a container from the systemd-enabled image:

sudo docker run -d --name linuxamination --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-ubuntu:20.04

2. Log into the container

sudo docker exec -it linuxamination bash

3. Update the package index inside the container

apt update

4. Install the dependency packages

apt install wget curl sudo software-properties-common gnupg2

5. Add the Microsoft key and repository to apt.

sudo wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/18.04/mssql-server-2019.list)"

6. Update the repository

apt update

7. Install SQL Server

sudo apt install mssql-server

8. Configure SQL Server

/opt/mssql/bin/mssql-conf setup

During configuration, you need to select the Express (free) edition if you do not have a license key for SQL Server.

Then accept the license terms and set a password for your SQL Server instance.
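
Once setup finishes successfully, you can check that the service actually came up before moving on (a quick sanity check; mssql-server is the service name installed by the package):

systemctl status mssql-server
systemctl is-active mssql-server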

If you get the following error while configuring SQL Server:

Ubuntu Docker Container Error - System has not been booted with systemd as init system (PID 1). Can't operate

Then follow this tutorial from the beginning and you will not get this error. If you launched a container from an existing Ubuntu image and are only following the remaining steps from this tutorial, you may hit it.

Take a look here for the solution:

http://linuxamination.blogspot.com/2020/11/ubuntu-docker-container-error-system.html

The same issue is also covered in the post 'System has not been booted with systemd as init system (PID 1). Can't operate - Docker Error' further down this page.

9. Now you need to install mssql-tools; add the tools repository by running these commands in your container:

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
curl https://packages.microsoft.com/config/ubuntu/19.10/prod.list > /etc/apt/sources.list.d/mssql-release.list

10. Update the repository

sudo apt update 
sudo ACCEPT_EULA=Y apt install mssql-tools unixodbc-dev
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
source ~/.bashrc

11. Now connect to the SQL Server console using:

sqlcmd -S 127.0.0.1 -U SA 

Enter the password you chose during configuration. You are now at the SQL Server command line.

12. Create database

1> create database mydb;
2> go

13. Get a list of databases:

1> select name from sys.databases;
2> go
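
As a quick usage example (the table and column names here are made up), you can switch to the new database and create a table from the same prompt:

1> use mydb;
2> create table visitors (id int, name varchar(50));
3> insert into visitors values (1, 'linuxamination');
4> select * from visitors;
5> go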


System has not been booted with systemd as init system (PID 1). Can't operate - Docker Error

If you are starting a service in a Docker container using the systemctl command, or you are configuring a service and getting the above error, check the output of the command

ps aux

in your Docker container. If the PID 1 process is not systemd, this is the issue.


In my case, the PID 1 process was bash because I launched the container with the bash command.

While launching your container, you might have started it with bash or some other command. It needs to be launched with systemd running as PID 1.

If you launch a container with systemd and it stops after a while and cannot be started again, it means the image you are launching from was not built to run systemd as PID 1.

You need an image that was built for this purpose.

Solution :

Pull and run the following image from the Docker registry:

docker run -d --name linuxamination --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-ubuntu:20.04

I needed Ubuntu 20.04 for my service; if you need Ubuntu 18.04 or 16.04, simply replace 20.04 with 18.04 or 16.04 in the above command and it will pull the requested image.

Once you run the above command, the container is already created and running, so you do not need to launch it separately. Log into the container using the following command:

docker exec -it linuxamination bash

Once you are inside the container, the PID 1 process will be systemd.

Now if you run systemctl or configure any service, you will not get the error you were getting before.
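
To confirm, you can check what is running as PID 1 without even entering the container (a quick sanity check):

docker top linuxamination
docker exec -it linuxamination systemctl status

The first process listed by docker top should be systemd, and systemctl should respond normally instead of printing the PID 1 error.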

Systemd Error solution for different Docker Images :

a) Solution for Ubuntu
Pull Image command :
docker run -d --name Linuxamination --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-ubuntu:20.04
Log into the Container :
docker exec -it Linuxamination bash

b) Solution for CentOS
Pull Image command :
docker run -d --name linuxaminationC8 --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro alekseychudov/centos8-systemd
Log into the Container :
docker exec -it linuxaminationC8 bash

c) Solution for Debian
Pull Image command :
sudo docker run -d --name systemd-debian --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-debian:11
Log into the Container :
sudo docker exec -it systemd-debian bash

d) Solution for Fedora
Pull Image command :
sudo docker run -d --name systemd-fedora --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-fedora
Log into the Container :
sudo docker exec -it systemd-fedora bash

e) Solution for Redhat Linux
Pull Image command :
sudo docker run -d --name linuxamination --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro registry.access.redhat.com/ubi8/ubi-init:8.1
Log into the Container :
sudo docker exec -it linuxamination bash

f) Solution for AlmaLinux
Pull Image command :
sudo docker run -d --name almalinuxamination --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro almalinux/8-init
Log into the Container :
sudo docker exec -it almalinuxamination bash


Monday, October 19, 2020

fail2ban to secure ssh server

SSH is a network protocol for operating network services securely over an unsecured network, but the word 'secure' in the name doesn't mean it cannot be broken. Admins still need to configure it securely.

If you are forced to provide SSH access globally, I would suggest not using passwords as a login method. RSA keys are a more secure way to log in and make it much harder to gain unauthorized access to the server.

But if for some reason you cannot use RSA keys as a login method, you should install fail2ban on your server. It protects you in several ways against unauthorized access.

If you are under the impression that nobody is trying to log into your server, try this on a dummy server: open port 22 for SSH login and check /var/log/auth.log or /var/log/secure after an hour or two; you will find countless SSH login attempts. People are trying very hard, from every corner of the earth, to get inside your server.

Here is how to secure your server if your SSH login method is password-based.

1) Install fail2ban on your Linux system.

apt install fail2ban
or
yum install fail2ban

2) Configure jail.local

nano /etc/fail2ban/jail.local
	[DEFAULT]
	 ignoreip = 127.0.0.1/8 ::1
	 bantime = 7200
	 findtime = 900
	 maxretry = 5
	[sshd]
	 enabled = true
     
service fail2ban restart

Now fail2ban is configured on your server. But is it actually working? A valid concern; read the following steps.

A) If you want to know how many jails you have created in fail2ban, here is the command to check the jail list:

sudo fail2ban-client status

In the configuration above we created only one jail, so it will list only one, i.e. 'sshd'.

B) Now you want to check how many IPs have been blocked. The command below shows the total number of banned IPs as well as the list of IPs currently banned. Check the status of currently blocked IPs:

sudo fail2ban-client status <jailname>
sudo fail2ban-client status sshd

You set bantime to 7200 in the config, which means an IP is blocked for 2 hours once there are 5 or more failed login attempts. You can lower the allowed number of failed attempts and increase bantime depending on your requirements.

C) If you want to block an IP or a whole IP range manually, here is the command:

sudo fail2ban-client -vvv set sshd banip 141.98.10.0/24
sudo fail2ban-client -vvv set sshd banip 222.141.207.246

The first command blocks the IP range 141.98.10.0 to 141.98.10.255, i.e. all 256 addresses. The second command blocks the single IP 222.141.207.246.
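
The reverse operation is just as useful: if you accidentally ban yourself or a legitimate user, you can lift the ban with the matching unbanip command.

sudo fail2ban-client set sshd unbanip 222.141.207.246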

D) If you think nobody wants to log into your server because your files are useless to them, try the command below. Even if you keep just one blank file a.txt on your server and people get access to it, they will write into that file how many bitcoins they want, or simply delete it along with any other OS files your user is allowed to remove.

Check all failed login attempts:

grep rhost /var/log/auth.log
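
To get a quick summary of which hosts are attacking hardest, you can count the rhost entries (a small sketch using GNU grep; the log path may be /var/log/secure on RHEL-based systems):

grep -oP 'rhost=\K\S+' /var/log/auth.log | sort | uniq -c | sort -rn | head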
Unauthorized SSH access can cause enormous damage to you and your server. Do not take it lightly.
1) Always use key-based login for your SSH user and do not open port 22 globally.
2) If you are forced to open it globally, use key-based login, and RSA keys only.
3) If you are forced to use password login with global access, you must choose a very strong password for your user and configure fail2ban.

Yandex New Account - Enable 'Create Organization' Option to add multiple Organizations under one account

If you have registered a new account on Yandex to connect your domain to Yandex mail, you might face an issue adding multiple organizations under one account.
You cannot find the Organization dropdown in the Admin Tools of your Yandex account. Here are the steps to activate the Organization dropdown in your Yandex account.

1) Register new account on Yandex
https://connect.yandex.com > Try out > Register
or
https://passport.yandex.com/registration

2) After registering and logging into the new account, you can add your domain in the Domains section.

3) Once it is verified, if you want to add another domain under the same account, you should add it as a different organization.

If you add it under the same organization, it may create issues. The best option is to create another organization, but this option may not be visible in a new Yandex account.

4) To make it visible, connect.yandex.com > login > Admin Tools > Your username/image icon at top right > Add new business
https://connect.yandex.com/portal/registration?action=add&source=connect&preset=&retpath=https%3A%2F%2Fconnect.yandex.com%2Fportal%2Fhome
Click on Create an organization.

5) Now a new organization will be created with a new organization ID, which can be renamed in the profile section.
In the same way you can create as many organizations as you want, and you do not need the 'top left menu' option to create a new organization.

Lex chatbot is not rendering html on web page

If you have integrated a Lex chatbot into your website and it is showing raw HTML as the response message, it means your JavaScript is not rendering the HTML on the web page.

You need to handle this on the client side, since sending HTML as a response message from Lex is a valid approach.

If you are using the default JavaScript code from the AWS blog for your Lex chatbot integration, you need to modify it a little.

Find the JavaScript function showResponse(lexResponse)

and replace the following piece of code

    if (lexResponse.message) {
        responsePara.appendChild(document.createTextNode(lexResponse.message));
        responsePara.appendChild(document.createElement('br'));
    }

with

    if (lexResponse.message) {
        var message = lexResponse.message.replace(/"/g, '\'');
        responsePara.innerHTML = message;
        responsePara.appendChild(document.createElement('br'));
    }

Now you can add HTML markup in the message section of the Lex chatbot; that message will be returned when the utterance is matched, and the HTML will be rendered on the web page.

jenkins error - Failed to connect to repository : Command 'git ls-remote -h'

If you are adding a repository in the 'Source Code Management' section of Jenkins and, even after adding the correct username and password as credentials, it still shows the 'Failed to connect to repository' error, this solution might work for you.

Solution :

Check for the character '@' in your username and password. I would suggest not using your email address as your Git username. To make it easy to remember, people often use their full email address as the username. If your email is john.doe@mail.com, do not use 'john.doe@mail.com' as the username; use john.doe, or if that is not available, add a number after it.

Similarly, you should not use '@' in the password either.

After replacing '@' with another special character such as '_', try connecting to the Git repository from Jenkins again.


Sunday, September 13, 2020

ftp connection error on command line

500 I won't open a connection to 172.31.xx.xxx (only to 19.216.xxx.xxx)ftp: bind: Address already in use

If you are trying to connect to FTP on the command line and you get the above error, it means ftp is trying to set up the connection using the private IP, but the remote server can only be reached via the public IP. You passed the public IP, yet it still tries the private IP and fails with error 500. Here is the solution.

Solution :

Connect to the FTP server in passive mode using -p:

ftp -p ftp.domainname
or
ftp -p 19.216.xxx.xxx
It will ask for a username and password, and after entering them you should be able to connect to the remote server successfully.
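
If you are already inside an ftp session that was started without -p, most command-line clients also let you toggle passive mode at the prompt instead of reconnecting:

ftp> passive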

bitbucket pull/push error - gnutls_handshake() failed: Handshake failed

bitbucket gnutls handshake failed error :

If you are using Ubuntu 14.04 or older and have recently started getting this error while pulling from or pushing to a Git repository: everything was working fine and it suddenly stopped.

You are not the only one facing this issue. Bitbucket removed support for libcurl3 because of a vulnerability. To use Bitbucket, you should have libcurl4 installed on your system.

Solution :

Add the following lines to the file /etc/apt/sources.list:


deb http://security.ubuntu.com/ubuntu xenial-security main
deb http://cz.archive.ubuntu.com/ubuntu xenial main universe
Run :

sudo apt-get update && sudo apt-get install curl
If you are not able to install libcurl4 because of Python dependency errors on your Ubuntu machine, follow the accepted solution in the linked Stack Overflow answer and then follow the above steps to install libcurl4.

 

tomcat error - An attempt was made to authenticate the locked user "user"

If you are getting 'Permission Denied' for some modules of your Tomcat web application and the error 'An attempt was made to authenticate the locked user "user"' appears in the log, here is the solution for you.

Solution :

The 'work' directory inside Tomcat does not have sufficient permissions for the JSP files to be compiled and served in the browser. You need to fix the permissions on the 'work' directory: the owner and group of the 'work' directory should be 'tomcat'.

chown -R tomcat:tomcat /usr/share/tomcat/work
Your Tomcat directory path may be different; it may be under /opt or /var/lib/tomcat. Find your Tomcat directory and give it the correct ownership.
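
If you are not sure where the work directory lives on your system, a quick way to locate it before changing ownership (paths vary per distribution):

sudo find / -type d -name work -path '*tomcat*' 2>/dev/null
ls -ld /usr/share/tomcat/work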

Ubuntu pip install error - __main__.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory: 'curl-config'

Pip install error in Ubuntu:

__main__.ConfigurationError: Could not run curl-config: [Errno 2] No such file or directory: 'curl-config'

Solution :

You are missing a package on your Ubuntu system; install:

sudo apt install libcurl4-openssl-dev libssl-dev
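
After installing the dev packages, re-run the pip install that failed; the curl-config tool is typically needed when building pycurl, so a check along these lines should now succeed:

pip install pycurl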

AWS S3 billing for 'Amazon Simple Storage Service APS3-TimedStorage-ByteHrs'

If you are getting bills for S3 even though, as far as you know, there is not that much data in your bucket: when you check the bill, it charges you for several GB of data in your S3 bucket every month, but you do not remember storing that much data.

I would suggest checking the bucket size metric in S3. Open your S3 bucket and go to Management > Metrics; here you can see the total occupied size of your bucket. You can open this metric in CloudWatch using the 'View in CloudWatch' option. In the graph you can see the bucket size growing over time, which is very helpful for analyzing how fast your bucket fills up.

If you are using your S3 bucket to keep backup data, make sure you remove older data regularly. This may be the reason your bucket occupies so much space.

If you use a cron job to put data into the bucket and the same cron job removes data after X days, I would suggest checking the 'Versioning' option in the bucket properties. If versioning is on, then even when the cron job deletes data, S3 still keeps a previous version of it, and this may be the reason your bucket shows this much occupied space.

Open the S3 bucket in the AWS Management Console and enable the 'Show versions' toggle to see all the hidden versions; remove this data and your bucket will be back to the expected size.
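
You can also check for leftover versions from the command line with the AWS CLI (my-backup-bucket is a placeholder for your bucket name):

aws s3api list-object-versions --bucket my-backup-bucket --max-items 20
aws s3 ls s3://my-backup-bucket --recursive --summarize --human-readable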


Saturday, August 8, 2020

Connect to Oracle database using command line on Ubuntu 16.04

1. Download the Oracle client package (instantclient-basiclite-linux.x64-19.8.0.0.0dbru.zip) and the sqlplus command-line client package (instantclient-sqlplus-linux.x64-19.8.0.0.0dbru.zip) from the Oracle Instant Client download page.
Newer versions may be available; download those instead.

2. Now run commands
mkdir -p /opt/oracle
cd /opt/oracle

3. Unzip the downloaded package instantclient-basiclite-linux.x64-19.8.0.0.0dbru.zip
unzip instantclient-basiclite-linux.x64-19.8.0.0.0dbru.zip
and move the folder instantclient_19_8 into /opt/oracle.

4. Install the libaio package
sudo apt install libaio-dev
5. Run the commands
sudo sh -c "echo /opt/oracle/instantclient_19_8 > /etc/ld.so.conf.d/oracle-instantclient.conf"
  
sudo ldconfig

or
export LD_LIBRARY_PATH=/opt/oracle/instantclient_19_8:$LD_LIBRARY_PATH
6. Unzip the downloaded package instantclient-sqlplus-linux.x64-19.8.0.0.0dbru.zip
unzip instantclient-sqlplus-linux.x64-19.8.0.0.0dbru.zip
Move all the .so files from the extracted zip into /opt/oracle/instantclient_19_8:
mv *.so /opt/oracle/instantclient_19_8
7. Now connect to the Oracle DB on the command line.
./sqlplus username/password@domain:port/service_name
Example :
./sqlplus root/strongpassword@54.24.xxx.xxx:1532/ORCAD
8. After connecting to the SQL prompt, list all databases using the query:
select * from v$database;
View the schema names with:
select * from dba_users;
To display the columns available in the Oracle view v$database, execute:
desc v$database;

Error : pip OSError: mysql_config not found

While installing the mysqlclient package from pip, you may get the above error.

The package entry might look like this in requirements.txt:

mysqlclient @ git+https://github.com/PyMySQL/mysqlclient-python.git@ca630c01fb39f252a4c91a525e440beea4ac4447

Solution :

sudo apt-get install libmysqlclient-dev
For recent versions of Debian/Ubuntu (as of 2018) it is:
sudo apt install default-libmysqlclient-dev

Access denied; you need (at least one of) the SUPER or SET_USER_ID privilege(s)

If you get the above error while importing a MySQL database, it means the DEFINER in the SQL file is not the same as the user you are using to import the database.

If the DEFINER is set to root in your SQL file, either import the database as the root user or change the DEFINER in the SQL file to the username you are importing with.

You will find a line like this in your SQL file:

DEFINER = 'root'@'%'
or
DEFINER = 'root'@'localhost'

or

DEFINER = 'root'@'your-mysql-hostname'

You need to replace root with your MySQL user.
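
If the dump is large, editing it by hand is painful; a sed one-liner can rewrite every DEFINER clause in place (a sketch: dump.sql and myuser are placeholders, and the pattern assumes the quoted form shown above):

sed -i "s/DEFINER = 'root'@'[^']*'/DEFINER = 'myuser'@'localhost'/g" dump.sql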

List all active virtualhosts of apache in Linux

If you want to list all Apache virtual hosts on Linux, here is the command:

sudo apache2ctl -S

It lists all the active virtual hosts with their port numbers. You can easily track which hosts are running on port 80 and which are SSL-enabled and running on port 443.

It shows the configuration file path along with the name of each virtual host, which helps you make the required modifications.

`apache2ctl -S` is better than the a2query command because it finds all the active virtual hosts across all Apache config files, whether they are in sites-enabled or elsewhere.

If a virtual host is hidden in a non-default config file, it can easily be found using the above command.
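
For a large setup it also helps to filter the output for the one site you care about (example.com is just a placeholder):

sudo apache2ctl -S 2>&1 | grep -i example.com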

Now, how can you hide an Apache virtual host?

Apache config files have an order of precedence. If a virtual host is created in a sites-enabled config file and the same virtual host, with the same ServerName but a different DocumentRoot, is created in a mods-enabled config file, the mods-enabled virtual host wins, because the mods-enabled IncludeOptional entry appears earlier in apache2.conf.

So if you create a virtual host in the file /usr/src/core/base.conf and include this file at the end of /etc/apache2/mods-enabled/proxy.conf

IncludeOptional ../../usr/src/core/*.conf

and the same virtual host with the same ServerName but a different DocumentRoot exists in the regular virtual host file sites-enabled/000-default.conf, it will be hard to detect the actual virtual host config file location and the project's DocumentRoot without the command `apache2ctl -S`.

This was just one example; it can be done in more convoluted ways to hide virtual hosts and project directory paths and point you at the wrong application directory. You may be puzzled why your project changes are not reflected, or, while taking a backup, you may back up the wrong directory if you are not careful enough.

That's why you should stay up to date with Apache tricks, so no one can fool you while handing over a project.


Monday, July 6, 2020

Python reverse proxy nginx : [error] upstream timed out (110: Connection timed out) while reading response header from upstream

Generally, Python server-side script execution time is not a big concern and server admins do not struggle with it the way they do with Apache/PHP, but sometimes similar situations do arise.

Developers usually do not use a reverse proxy web server like Nginx or Apache to map the port onto a domain; they access the site via localhost and a port. There they do not get 'connection timed out', 'bad gateway' or similar issues; they see errors on the command line and start fixing them.

But reverse proxy web servers do not log server-side language errors; they report them as internal server errors such as 501, 502 or 503.

In the error above, the Python (Django or any other framework) script takes too long to complete, but the developer does not see a similar error on localhost because nothing there halts the script execution. This is a real headache for the server admin.

In this scenario, the allowed script execution time needs to be increased in the web server.

Solution :
Open the domain's virtual host in Nginx and add the following parameters.

Add them under location /:
 proxy_connect_timeout       3600;
 proxy_send_timeout          3600;
 proxy_read_timeout          3600;
 send_timeout                3600;


The time is in seconds; any sufficiently large value works (the snippet below uses 6000).
The complete 'location /' block should look like this:
  location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;
        proxy_pass         http://127.0.0.1:8000/;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_hide_header  X-Powered-By;
        server_tokens      off;
        autoindex          off;
        etag               off;
        proxy_connect_timeout       6000;
        proxy_send_timeout          6000;
        proxy_read_timeout          6000;
        send_timeout                6000;
  }

Restart nginx.

Now timeout error should not be there again.


phpmyadmin https error on login page

There is mismatch between HTTPS indicated on the server and client.
This can lead to non working phpMyAdmin or a security risk.
Please fix your server configuration to indicate HTTPS properly.

You get this error on the phpMyAdmin login page because you open the phpMyAdmin URL over an HTTPS domain, but the domain is proxy-passed to a backend that is reached over plain HTTP.

Solution :
Add the following line to the Apache virtual host of the domain:
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
Restart Apache.
or
If you are using Nginx,
add the following line to the Nginx virtual host of the domain:
proxy_set_header X-Forwarded-Proto https;
Restart Nginx.

Thursday, June 11, 2020

Windows 10 and Ubuntu 20.04 Dual Boot - Screen distorted when booting OS

If you have an Ubuntu and Windows dual-boot system, your screen may often get distorted while booting into Windows 10.


Windows is booting in the background while you see the distorted screen.

Here is the solution for you.
1. Boot into Ubuntu.
2. Edit the file /etc/default/grub in Ubuntu 20.04
and uncomment the following line:
GRUB_TERMINAL="console"
3. Now run the command:
sudo update-grub
4. Now boot into Windows 10.

The boot selection screen will be displayed without any graphical effects and the above issue will be resolved.

Ubuntu 20.04 and Windows 10 Dual Boot - Read Only Filesystem Error in Ubuntu for NTFS Partitions

If you have installed both Windows and Ubuntu as a dual boot, there is one issue that can be a real headache:

the read-only filesystem error for NTFS drives.

When you try to remove a file, create a file or update anything in any directory of the NTFS partition from Ubuntu, you are prevented from doing so. If you reboot into Windows and perform the same operations there, nothing stops you, but after rebooting back into Ubuntu the same issue occurs again. This can be really frustrating.

Here are some solutions to this problem.

1. Never hibernate, sleep or suspend the Windows system if your primary system is Ubuntu. If you use Windows only for gaming or to create/update files in MS Office, i.e. you mostly use Ubuntu and boot Windows only occasionally, then do not suspend or hibernate Windows; this may be the reason for your read-only filesystem error.
To fix it, restart Windows properly and boot into Ubuntu. The issue should be gone.

2. If a reboot didn't solve your issue, you can use the ntfsfix command on the Ubuntu system:
ntfsfix /dev/sdaX
where /dev/sdaX is your NTFS partition; it could be /dev/sda2 or /dev/sda3.
Find the device name of your partition and ntfsfix may solve the issue.

3. If ntfsfix gives you an error telling you to run chkdsk, such as
'Volume is corrupt. You should run chkdsk.'
boot into Windows, open CMD as an administrator and run:
chkdsk C:
chkdsk E:
where C and E are the drives for which you are getting the read-only filesystem error in Ubuntu.

4. If you always shut down Windows properly, with no hibernation and no suspension, but every now and then you still get the read-only filesystem error on NTFS partitions in Ubuntu, here is a fix for you.
By default, Windows Fast Startup is enabled, and it is a kind of hibernation; this is the reason your NTFS drives give the read-only error. To disable the option, follow:
Settings > Power and Sleep > Advanced Settings > Option 'Choose what closing the lid does' or 'Choose what power button does'
> Click on 'Change settings that are currently unavailable' > Uncheck the checkbox 'Turn On Fast Startup (Recommended)'
Save the Settings.
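
If you prefer the command line, disabling hibernation from an administrator CMD prompt turns Fast Startup off as well (note this disables hibernation entirely, which may not be what you want on a laptop):

powercfg /h off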

Now you should not get the read-only filesystem error again.

Sunday, May 17, 2020

Elastic search backup - Import Export

If you need to migrate Elasticsearch with its data to another server, this may be a challenge if you have not done it before.

Before reaching this point, you have probably already done various things, including installing Elasticsearch on the new server. Now you want to import all existing data from the old server to the new one. Check the Elasticsearch version on the old server: have you installed roughly the same version on the new server?

Suppose you had a 7.x version on your old server; any 7.x version will work on the new server too. The Elasticsearch documentation says you can migrate 6.x data to 7.x, but I have not tried it.

I had 7.6 on the old server; I added the 7.x repository to apt sources on the new server, got 7.7 there, and the migration worked fine.

There is a method endorsed by Elasticsearch itself for migrating data, based on creating snapshots and restoring them. I have not used it in this tutorial, as several tutorials already cover it. I have used a different method to achieve the same result.

Steps :
1) First, list all indices on the old server:
curl -L localhost:9200/_cat/indices
If your Elasticsearch server requires HTTP authentication, you need to pass the credentials to curl.

This is my output:

green    open    .apm-custom-link        NQlqIQaCwO_S8jWYSZA    1 0    0 0     208b     208b
yellow    open    kibana_sample_data_ecommerce    eMXnWTJylltrthTGqgw    1 1  4675 0    4.3mb    4.3mb
green    open    .kibana_task_manager_1        g7hUKicRTHufkJ65Hlg    1 0     5 3   33.6kb   33.6kb
yellow    open    conzexant_index5        HXQXmMBaBgcUxzrWORw    1 1     1 0    3.5kb    3.5kb
yellow    open    conzexant_other_entities    ZseVzSv6RaCxAMwn48w    1 1     1 0   99.1kb   99.1kb
green    open    .apm-agent-configuration    rcZAEX0NSDVbYJtvi8g    1 0     0 0     208b     208b
yellow    open    conzexant_index1        Ne_S7UQSQuyyXwFgb_Q    1 1 13916 0   28.2mb   28.2mb
yellow    open    conzexant_index2        leoZmPUqQoekP9DhIYQ    1 1 14355 0   68.3mb   68.3mb
green    open    .kibana_1            _U1SYeCKT5aGXgTldvA    1 0   123 4 1005.7kb 1005.7kb


The indices I need to copy are kibana_sample_data_ecommerce, conzexant_index5, conzexant_index1, conzexant_index2 and conzexant_other_entities.

In the curl output, the third column is the index name. If you run the same command on the new server, you will get the default list of indices; by comparing index names with the old server you can easily work out which indices you need on the new server.

2) Install elasticdump. First verify access: can you reach Elasticsearch on the new server from the old server, or on the old server from the new one, using curl? If both directions work you can install elasticdump on either server; if only one direction works, you need to install it on that side.
If neither server can reach the other, there is Method B: export to JSON and import, which we will cover shortly.

Suppose you can access the new server's Elasticsearch from the old server; log into the old server and install elasticdump.
Install Node.js version 8 or newer; npm will be installed with it.
Now install elasticdump:
npm install elasticdump
If you ran this command in your home directory, elasticdump is installed inside the node_modules folder there.

3) Now export and import each index to the new server using the following commands.
a) First dump the analyzer to the new server:
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/index_name --output=http://newserverIP:9200/index_name --type=analyzer
Here index_name is the name of the index you want to migrate to the new server.
In my case the command was:
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/conzexant_index5 --output=http://newserverIP:9200/conzexant_index5 --type=analyzer
New server IP was 35.34.xxx.xxx

b) Then dump the mapping to the new server:
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/index_name --output=http://newserverIP:9200/index_name --type=mapping
In my case the command was:
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/conzexant_index5 --output=http://newserverIP:9200/conzexant_index5 --type=mapping
c) And finally dump the data:
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/index_name --output=http://newserverIP:9200/index_name --type=data
Similarly, I dumped the data for each index by name.

I have assumed here that there is no authentication on Elasticsearch (which is very bad); if there is authentication, you need to pass the credentials with the command.

4) Once it is done, verify the data on the new server. Log into the new server and run:
curl -L localhost:9200/_cat/indices
The imported index will be listed on the new server with all its data. Repeat the process for the rest of the indices.

Method B:
If you cannot access the new server from the old server or vice versa, you need to export the data to JSON on the old server and then import it on the new server.
In this method, you need to install elasticdump on both servers.

Log into the old server
Export :
Export Analyzer :
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/index_name --output=index_name-analyzer.json --type=analyzer
Export Mapping :
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/index_name --output=index_name-mapping.json --type=mapping
Export Data :
~/node_modules/elasticdump/bin/elasticdump --input=http://localhost:9200/index_name --output=index_name-data.json --type=data
Copy all three JSON files to the new server and log into the new server using SSH.
cd into the directory where the JSON files are stored and import them one by one.
Import :
Import Analyzer :
~/node_modules/elasticdump/bin/elasticdump --input=index_name-analyzer.json --output=http://localhost:9200/index_name --type=analyzer
Import Mapping :
~/node_modules/elasticdump/bin/elasticdump --input=index_name-mapping.json --output=http://localhost:9200/index_name --type=mapping
Import Data :
~/node_modules/elasticdump/bin/elasticdump --input=index_name-data.json --output=http://localhost:9200/index_name --type=data
This is the process for exporting and importing one index; repeat it for every index you need.

You can write a shell script to export all indices from the old server and import them into the new server, along the lines of the sketch below.
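
A minimal sketch of such a script, pushing directly from the old server as in Method A (NEW_SERVER and the index list are placeholders you must adjust):

#!/bin/bash
# Migrate a fixed list of indices from the local Elasticsearch to a remote one
# using elasticdump. Adjust NEW_SERVER and INDICES before running.
NEW_SERVER="http://newserverIP:9200"
INDICES="conzexant_index1 conzexant_index2 conzexant_index5 conzexant_other_entities kibana_sample_data_ecommerce"

for index in $INDICES; do
  for type in analyzer mapping data; do
    ~/node_modules/elasticdump/bin/elasticdump \
      --input="http://localhost:9200/${index}" \
      --output="${NEW_SERVER}/${index}" \
      --type="${type}"
  done
done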

Note : Elasticsearch also offers a snapshot-and-restore method to migrate data. This is just another method that gives you the same result.

Docker error : debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) debconf: delaying package configuration, since apt-utils is not installed

debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module)
debconf: delaying package configuration, since apt-utils is not installed

Solution :

Install apt-utils in the container:
sudo apt-get install apt-utils
and run the command:
echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections


apt repository error

W: GPG error: http://ppa.launchpad.net/ondrej/php/ubuntu xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY

If you get this error while installing an Ubuntu package, you must have added a repository to apt earlier, but its signature cannot be verified because the public key is not available. Here is the solution for you.

There will be a key after the text 'NO_PUBKEY' in the error message; copy the key and run the command:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <PUBKEY>

Use your key in the command in place of <PUBKEY>.

Your command should look like:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 4F4EA0AAE5267A6C

Now try to install the package again; you should not get the above error.


Saturday, May 16, 2020

Codeigniter 404 error - Application is migrated from windows to linux

If you have migrated your CodeIgniter project from Windows to Linux and it is giving a 404 error on Linux while the same project worked fine on Windows, there may be a case-sensitivity issue in your filenames, because Linux treats code.php and Code.php in the same folder as different files.

Solution :
Change the first letter of the filename to uppercase for all files in both models and controllers.
Folder names should start with a lowercase letter, but the filenames inside those folders should start with a capital letter.

models :
./Admin.php
./Employee.php
./Emails.php
./Contractor.php
./Index.html
./User.php
./users
        ./users/User.php
        ./users/Email.php
./employees
         ./employees/User.php
        ./employees/Email.php

controllers:
./Admincontroller.php
./Employeecontroller.php
./Mailcontroller.php
./Index.html
./Users.php
./users
        ./users/Usercontroller.php
        ./users/Emailcontroller.php
./employer
         ./employer/Employercontroller.php
        ./employer/Employer.php
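
If there are many files, renaming them by hand is tedious. A small bash loop run inside application/models and application/controllers (it needs bash 4 for the ${f^} expansion; mv -n avoids overwriting an existing file) can capitalize the first letter of each PHP filename:

for f in *.php; do
  first_upper="${f^}"
  [ "$f" != "$first_upper" ] && mv -n "$f" "$first_upper"
done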

Start 'npm start' in the background

If nohup gives an error when starting npm, here is another way to run npm in the background.

First, install forever globally:
npm install -g forever
cd into the project:
cd /project/path/
Run the forever command:
forever start -c "npm start" ./
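
To check on or stop the background process later, forever provides matching commands:

forever list
forever stop 0
forever stopall

forever list shows each managed process with an index; forever stop takes that index, and forever stopall stops everything forever is managing.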