Tuesday, December 17, 2019

AWS RDS - Update Your Amazon RDS SSL/TLS Certificates by February 5, 2020

If you received the mail from AWS about updating the client-side certificate for your RDS connections, here is a simple explanation of what you need to do. Your first concern is that the functionality of your application should not be affected and everything should keep working as smoothly as it does now.

When you connect to a database, you connect either securely or non-securely. If you connect non-securely, you do not need to worry about the mail you received, as it affects only those connections which are made securely. Suppose your RDS instance is MySQL or PostgreSQL and you connect to it from a PHP framework or from Python: check the database connection code, and if an ssl/tls parameter is defined with a client certificate file, you are using the secure method to connect to RDS. You need to download the latest client-side certificate file from here or here and replace the existing one.
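For reference, the 2019 certificates were published by AWS at these URLs at the time of writing (verify them against the current AWS documentation before relying on them):

wget https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem
wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem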

Another way of checking: look at the parameter rds.force_ssl in the parameter group settings of your RDS instance. If its value is set to 0, insecure RDS connections can also be made, but if its value is 1, no insecure connection is allowed, and your code must already be using a client-side certificate file for connections to work.

Similarly, if force ssl is on, you cannot connect to the database on the command line without providing the client-side certificate file.
Here is an example of connecting to a PostgreSQL RDS server on the command line:
psql -h testpg.cdhmuqifdpib.us-east-1.rds.amazonaws.com -p 5432 "dbname=testpg user=testuser sslrootcert=rds-ca-2019-root.pem sslmode=verify-full"

Secure MySQL connection on RDS (the --ssl-mode form below is for MySQL 5.7+ clients; older clients use the --ssl-verify-server-cert form):
mysql -h myinstance.c9akciq32.rds-us-east-1.amazonaws.com --ssl-ca=[full path]rds-combined-ca-bundle.pem --ssl-mode=VERIFY_IDENTITY

mysql -h myinstance.c9akciq32.rds-us-east-1.amazonaws.com --ssl-ca=[full path]rds-combined-ca-bundle.pem --ssl-verify-server-cert
If you enable rds.force_ssl and restart your instance, non-SSL connections are refused with the following message for PostgreSQL:
psql: FATAL: no pg_hba.conf entry for host "host.ip", user "someuser", database "postgres", SSL off
and a similar message will be displayed for MySQL and other RDS database types.

Thursday, December 12, 2019

Add Swap Space in AWS EC2 Centos or Linux AMI

Here are the steps to create swap space in AWS EC2 CentOS or Linux AMI

1. Create an EBS volume (SSD gp2) of the size you want for your swap space. Suppose it is 4G.
2. Attach the volume to your instance. Suppose the attached device is /dev/xvdf.
3. Now run
sudo mkswap /dev/xvdf

4. sudo swapon /dev/xvdf
5. Edit the file /etc/fstab and add the following line so the swap persists across reboots:
/dev/xvdf none swap sw 0 0
6. Now you can verify the added swap space.
sudo swapon --show
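You can also confirm with free; the Swap row should show the new 4G:

free -h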

Postfix : postdrop warning unable to look up public/pickup

Your Postfix is running but it is not sending any mails. If you see the above warning in the log, the pickup FIFO in the Postfix spool directory is missing; the solution is to recreate the FIFO and restart Postfix.

sudo mkfifo /var/spool/postfix/public/pickup
sudo service postfix restart
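You can confirm the FIFO was recreated; the leading p in the file mode indicates a named pipe:

ls -l /var/spool/postfix/public/pickup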


After running both commands, your issue should be fixed and mails should be sent successfully.

Gitlab - Stop Showing your gitlab setup and repositories in google search results


If your gitlab setup is accessible at a global url but you do not want it to show up in Google search results, here is a solution for you.

1. You can restrict your gitlab url to limited IPs only. GitLab itself listens on a local port or socket, and you must have used a web server (apache, nginx or some other) to serve the gitlab url globally. You can add access-control directives in your virtualhost which will stop gitlab from being accessed from unapproved IPs, as in the sketch below.
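A minimal Apache 2.4 sketch, assuming the allowed addresses are 203.0.113.0/24 and 198.51.100.7 (placeholders; substitute your own):

<Location />
    Require ip 203.0.113.0/24 198.51.100.7
</Location>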

2a. Sometimes you need to access gitlab from arbitrary IPs and it is not feasible to change the virtualhost setting every time. If you cannot restrict your gitlab setup to a few IPs but still do not want it in Google search results, you can try this solution.
Always make your repositories and groups private. Do not create any public repository or public group; public repositories and groups are visible in google search results. Gitlab has a setting for this in the admin area: after enabling it, no registered user can create a public repository or group; only the admin can.
Here is the settings.

Admin Area > Settings > General > Visibility and access controls > Restricted visibility levels

Check the box Public.
Now no registered user can create a public repository or group.

2b. After following solution 2a, you need to implement solution 2b. Like every other web application, gitlab too has a robots.txt file.

robots.txt is a set of directions for search engines and crawlers, and well-behaved crawlers follow it. If you write a rule disallowing your site, your web application will not be listed in search results.

By default gitlab allows the login page and the explore page to be listed in search results. The explore page contains the list of all public repositories and groups, and if your gitlab has some, they will appear in search results. You need to modify your gitlab robots.txt. Here is the path.

/opt/gitlab/embedded/service/gitlab-rails/public/robots.txt

Now comment every single line except these two

 User-Agent: *
 Disallow: /

This will stop your gitlab url from showing up in search results. If it is already listed, it will disappear some days after you change robots.txt.

Wordpress - https mixed content issue

After configuring ssl on your wordpress site, the most annoying problem is the mixed content error: your browser warns "Your connection is not fully secure". Here is the solution to fix the issue.

Solution : 1
1. You need to change all urls in the code from http to https manually. If there are js files referenced over http in any plugin or in the uploads directory (for example a theme's fusion directory), you need to find and replace those too.
2. You can install the Go Live plugin on your wordpress site to replace all urls in the database; it provides an easy option to replace all http urls with https. WP-CLI can do the same, as shown below.
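If you prefer the command line, WP-CLI's search-replace does the same database replacement; a sketch assuming your site is example.com (skipping the guid column is the usual recommendation):

wp search-replace 'http://example.com' 'https://example.com' --skip-columns=guid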

OR

Solution : 2
There is one easier solution: install the wordpress plugin named "Really Simple SSL". Once you activate it, it delivers all urls to the browser as https and you do not get the same error again.
Let me know which solution worked for you.

Monday, November 11, 2019

Run php 7 project in docker container where php 5.6 project is running on Host

Run multiple php versions in docker and host with apache / Run multiple websites on php 7 docker without affecting php 5.6 of host.

Suppose one of your php projects is running on php 5.6 on ubuntu/centos/fedora and now you want to run another project with php 7 inside a docker container. This is quite a good way to run multiple php projects on the same machine.

1) Suppose you have a container where apache2 and php 7 are running.

2) You have already launched this container with the port mapped and the htdocs path mounted:
docker run -it -v /var/www/html:/var/www/html -p 7030:80 ubuntu:16.04 /bin/bash
As port 80 of the host is already busy (the php 5.6 project runs there), we mapped port 80 of the container to port 7030 of the host.
We have also mounted /var/www/html of the host at /var/www/html of the container, so we do not need to go inside the container to access project files; they are available in the html folder of the host.

3) Point your subdomain to the public IP of host and add a virtualhost in apache config of host.
<VirtualHost *:80>
ServerName php7.project.com
ProxyPreserveHost On
ProxyRequests off
ProxyPass / http://127.0.0.1:7030/
ProxyPassReverse / http://127.0.0.1:7030/
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
We proxy pass to port 7030 because port 80 of the docker apache is mapped to port 7030 of the host.
With 'ProxyPreserveHost On', the original Host header is forwarded to the backend, so js and css urls keep the servername instead of the proxied IP.
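Note this assumes mod_proxy and mod_proxy_http are enabled on the host apache; on Debian/Ubuntu you can enable them with:

sudo a2enmod proxy proxy_http
sudo systemctl reload apache2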

4) Now create a file php7project.conf in /etc/apache2/sites-available of the docker apache with the following content
<VirtualHost *:80>
ServerName php7.project.com
DocumentRoot /var/www/html/projectfoldername
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

5) Enable the site.
cd /etc/apache2/sites-available
a2ensite php7project.conf
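a2ensite only activates the configuration; reload apache inside the container for it to take effect:

service apache2 reload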

6) Now the virtualhost of the host apache is mapped to the virtualhost of the docker apache and it will open the site.
http://php7.project.com/

7) Once you set up the correct database connection in your docker apache project, your site is ready to access.

Metabase - Shifting your metabase docker container with existing data

1) Export the docker container from the source instance as a .gz:
docker export containerid | gzip > containername.gz
2) Import it on the destination instance:
zcat containername.gz | docker import - imagename
3) Run the container with port 3000 mapped:
sudo docker run -it -p 3000:3000 6cf7f6ca9351 /bin/bash
where 6cf7f6ca9351 is the image id.
4) Start the container and attach to it:
docker start containerid
docker attach containerid

5) Run the file metabase.jar:
java -jar metabase.jar
If it creates new files metabase.db.mv.db and metabase.db.trace.db, Metabase is starting a fresh setup, so your previously created dashboards and questions will not be visible.
You need to find the existing database files, i.e. metabase.db.mv.db and metabase.db.trace.db, in the container; they will be large in size. These are the existing db files, and you need to make Metabase use them as its database; only then will Metabase resume from its previous state.
If you have already started Metabase and it has created new db files, stop Metabase and replace the new files with the existing ones.
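A minimal sketch of the swap, run from the directory containing metabase.jar and assuming the old files were found at /path/to/old (adjust to the actual location in your container):

rm metabase.db.mv.db metabase.db.trace.db
cp /path/to/old/metabase.db.mv.db /path/to/old/metabase.db.trace.db .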

6) Start metabase again.
Once "Metabase Initialization COMPLETE" message appears and you go on login page instead of setup, old metabase is back.
You can login with existing details. Now make changes in database credentials from admin panel if database is changed on new instance, dashboards and questions will start showing your data from changed database.

Errors :
1) Connections could not be acquired from the underlying database.
Solution : Make sure metabase is able to connect to the database.

2) Java heap space memory issues.
Solution : Ensure the instance has sufficient RAM.
You can cap the RAM assigned to metabase when running it.
Generally, leaving 1-2 GB of RAM for other processes should be enough; for example, you might set -Xmx to 1g for an instance with 2 GB of RAM, 2g for one with 4 GB of RAM, 6g for an instance with 8 GB of RAM, and so forth. You may need to experiment with these settings a bit to find the right number.
For 2GB RAM
java -Xmx1g -jar metabase.jar
A more thorough approach is to also capture a heap dump on out-of-memory errors:
java -Xmx1g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/a/directory -jar metabase.jar

Thursday, October 17, 2019

Connect to RDS through EC2 on local mysql workbench

Suppose you connect to AWS RDS through an EC2 instance from mysql workbench on your local machine using the method 'Standard TCP/IP over SSH', because you do not want to open mysql port 3306 globally; you have opened port 3306 of RDS for the EC2 instance only, as it should be reachable from EC2 alone.

You first connect to EC2 using ssh (key file) and then EC2 makes the connection to RDS. This is quite a common approach to access an RDS database locally.

But in this approach you may get error
Failed to Connect to MySQL at 3306 through SSH tunnel at with user
"Lost connection to MySQL server at 'reading initial communication packet, system error: 0"



The reason behind this error is that the ssh config on the EC2 instance does not allow tcp forwarding; you need to allow it.
Open the file /etc/ssh/sshd_config and check the attribute 'AllowTcpForwarding'. Its value is set to no, which is why you are getting the error; it should be yes:
AllowTcpForwarding yes
Now after changing the value, restart ssh (see the commands below) and try to connect again from mysql workbench. The connection should be made successfully.
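The change can be scripted like this (the sed pattern assumes the directive currently reads 'AllowTcpForwarding no'; the service name is sshd on CentOS/Amazon Linux and ssh on Debian/Ubuntu):

sudo sed -i 's/^AllowTcpForwarding no/AllowTcpForwarding yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd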

apache2 - Block multiple ips for one of the hosted websites on CentOS / Ubuntu

If you have hosted multiple websites on apache on your linux server by adding multiple virtual hosts, there may be a requirement that a particular website should not be opened from a specific IP. Here is the way to block a website for an IP.

Solution : 1
Create an .htaccess file in the document root of your website or, if the file already exists, add the following lines to it:
Deny from 181.39.xx.xxx
Deny from 134.249.xx.x
Deny from 112.193.xxx.xxx
Deny from 45.40.xxx.xx
Deny from 103.21.xxx.xx
or block a complete IP range:
Deny from 181.39.0.0/16
It will block the IP range from 181.39.0.0 to 181.39.255.255.
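Note that Deny from is Apache 2.2 syntax; it still works on Apache 2.4 through mod_access_compat, but the native 2.4 equivalent is:

<RequireAll>
    Require all granted
    Require not ip 181.39.0.0/16
</RequireAll>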

Solution : 2
If you want to block the IPs in the apache configuration instead:
Create a file /etc/apache2/conf-available/ipblack.conf
Add following content in it.
<Location />
Deny from 181.39.xx.xxx
Deny from 134.249.xx.x
Deny from 112.193.xxx.xxx
Deny from 45.40.xxx.xx
</Location>
Now add this line in the virtualhost of your website:
Include conf-available/ipblack.conf
Reload apache.
These IPs will not be able to open your site anymore; they will get a 403 Forbidden error.

Monday, September 2, 2019

Apache Django - Site is https but api, media, static and internal urls in http

You have configured your django application in Apache as https, but when you open a web page in the browser it still shows internal urls (media, static and API urls) as http. You are using absolute internal urls and you have configured https in all the required places in settings.py, but still no luck. Here is the solution.

Solution :
You need to make sure apache forwards the client's request scheme as https to Django. Add the following lines to your virtualhost (they require mod_headers to be enabled):
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
RequestHeader set "X-Forwarded-SSL" expr=%{HTTPS}

Restart Apache.
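On the Django side, the forwarded header only helps if settings.py is told to trust it; the standard setting from the Django docs is:

SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')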

If you are using nginx and facing the same issue, you need to add the equivalent line to the server block in your nginx configuration file:

proxy_set_header X-Forwarded-Proto $scheme;
Restart Nginx.
Now load the web page. All internal urls should be opened as https.

gitlab Error - execute[semodule -i /opt/gitlab/embedded/selinux/rhel/7/gitlab-7.2.0-ssh-keygen.pp] (gitlab::selinux line 20) had an error: Errno::ENOENT: No such file or directory - semodule


While configuring gitlab, if you get the above error, it means either /usr/sbin/semodule does not exist or the process configuring gitlab cannot find or execute it.
Are you using a cron job or shell script to configure gitlab?
Because if you configure gitlab on a terminal as the root user, you are unlikely to get this error unless the binary itself is missing or corrupted.

Error executing action `run` on resource 'execute[semodule -i /opt/gitlab/embedded/selinux/rhel/7/gitlab-7.2.0-ssh-keygen.pp]'
Solution :
gitlab has its own bin directory containing all the necessary executable files.
The default path of the directory is /opt/gitlab/bin. (You can find this path in the gitlab configuration file, i.e. /etc/gitlab/gitlab.rb.)
Make a soft link for the file.
sudo ln -s /usr/sbin/semodule /opt/gitlab/bin

Now configure gitlab using the same method; you should not get the error.

Orangehrm / Symfony Error - Parse error: syntax error, unexpected ''mo' (T_ENCAPSED_AND_WHITESPACE), expecting ') symfony/cache/orangehrm/prod/config/config_autoload.yml.php

Solution :
1. Check your disk usage: is the disk completely full? Free some space, restart Apache and check again.
2. If the above is not applicable:
Remove the file symfony/cache/orangehrm/prod/config/config_autoload.yml.php, then open the page in the browser again. It will create a new config_autoload.yml.php and the error should be gone.
Make sure the directory has write permissions so the file can be created. (A command sketch follows below.)
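A short sketch of the steps, assuming OrangeHRM is installed under /var/www/orangehrm (adjust the cache path to your setup):

df -h
sudo rm /var/www/orangehrm/symfony/cache/orangehrm/prod/config/config_autoload.yml.php
sudo service apache2 restart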

Java - Run jar file on a non-default port

Suppose you run a jar file using the command
java -jar filename.jar
and it runs on port 8080, but another service like Tomcat or Jenkins is already running on port 8080, so your jar cannot run there. For Spring Boot jars there is the option '-Dserver.port', with which you can choose the port your jar should run on:
java -Dserver.port=8999 -jar filename.jar
Now your service will run on port 8999.

Tuesday, August 20, 2019

mysqldump - sql dump with triggers, procedures, functions

--routines, -R

           Include stored routines (procedures and functions) for the dumped databases in the output. This option requires the SELECT privilege
           for the mysql.proc table.

           The output generated by using --routines contains CREATE PROCEDURE and CREATE FUNCTION statements to create the routines. However,
           these statements do not include attributes such as the routine creation and modification timestamps, so when the routines are reloaded,
           they are created with timestamps equal to the reload time.

           If you require routines to be created with their original timestamp attributes, do not use --routines. Instead, dump and reload the
           contents of the mysql.proc table directly, using a MySQL account that has appropriate privileges for the mysql database.

Example :
mysqldump --routines -u username -p database_name > database_name.sql
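Triggers are included by default (--triggers is on unless you pass --skip-triggers); to also dump scheduled events, add --events:

mysqldump --routines --events -u username -p database_name > database_name.sql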

mysqldump error

1044: Access denied for user 'username'@'localhost' to database 'database_name' when using LOCK TABLES

Solution :
Use --single-transaction with mysqldump.
Example :
mysqldump --single-transaction --routines -u username -p database_name > database_name.sql

Saturday, July 13, 2019

Postfix : SMTP; Client does not have permissions to send as this sender

If you are trying to send mails through Postfix using SMTP credentials and you are getting the above error, here is the solution for you.

Check the 'from address' in the log file: it is not the same email address as your SMTP login email id. To make the 'from address' the same as the login email id, add the following lines in the file /etc/postfix/main.cf
canonical_classes = envelope_sender
canonical_maps = regexp:/etc/postfix/canonical

Now create the file /etc/postfix/canonical and add your smtp login email address in it:
// username@domainname.com
(use both forward slashes in the file; the empty regexp matches every sender)
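Reload Postfix so the change takes effect (regexp lookup tables are read directly, so no postmap step is needed):

sudo service postfix reload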

Now it will send mails with this 'from address'.

Postfix : SASL authentication failed

SASL authentication failed; cannot authenticate to server smtp.server.net[52.75.xx.x]: generic failure

If you are trying to send mails through Postfix using SMTP credentials and you are getting the above error, here is the solution for you.

You need to add the following line in the file /etc/postfix/main.cf
smtp_sasl_mechanism_filter = login
Restart postfix.

I am assuming you have already added the following parameters in the main.cf file:
relayhost = [smtp.server.net]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options =
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtp_use_tls = yes

dynamodump error : u'message': u'The security token included in the request is invalid.

Solution :
You have not added the access key and secret key in the file $HOME/.aws/credentials. Add them to the file:

cat ~/.aws/credentials
[default]
aws_access_key_id = AKIAJSEXAMPLEHXI3BJQ
aws_secret_access_key = 5bEYu26084qExAmPlE/f2pz4gviSfoOgSaMpLe39

PDF Compression Error

FPDF error: This document (testcopy.pdf) probably uses a compression technique which is not supported by the free parser shipped with FPDI

Solution : 
Download Ghostscript from here according to your operating system.
Mine is Linux x86 (64 bit), so I downloaded Ghostscript 9.27 for Linux x86 (64 bit).
I renamed the file to gslin64; the original filename was gs-927-linux-x86_64.
Save it in /usr/bin.
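Assuming the download is in the current directory, the rename, install and executable bit look like this:

sudo cp gs-927-linux-x86_64 /usr/bin/gslin64
sudo chmod +x /usr/bin/gslin64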
Now run the following command
gslin64 -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dNOPAUSE -dQUIET -dBATCH -sOutputFile=/tmp/abc.pdf  input.pdf
where input.pdf is the problematic pdf file and abc.pdf is the error-free output file saved inside the /tmp directory.

sign_and_send_pubkey: signing failed: agent refused operation

If you are getting this ssh error while connecting to a remote instance, you might have changed your public and private keys recently.

To fix this issue you need to add your ssh keys to the running ssh agent. Run the command
ssh-add

If you get the warning 'Permissions are too open', give 400 permissions to your ssh keys:

chmod 400 ~/.ssh/id_rsa
chmod 400 ~/.ssh/id_rsa.pub
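You can verify the key is now loaded in the agent:

ssh-add -l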

Now it should not show the error 'signing failed: agent refused operation' while connecting to the remote host.

Sunday, June 9, 2019

Apache - Make specific GET request forbidden by matching pattern

If there are certain GET requests which you want to make forbidden (403) on your server by matching a pattern, you need to write rules in .htaccess or in the apache configuration file.

You can add the following snippet in your apache configuration file (the <If> directive requires Apache 2.4+) and it will block all GET requests which match the pattern.

Suppose an http request is
http://porcupine.com/paymentcontroller.php?id=oculus&name=johnathan
You can block this request either by id or by name or by both. I am blocking by id.
<If "%{QUERY_STRING} =~ /id=oculus/">
  Require all denied
</If>

Reload apache.
Now all requests containing the text 'id=oculus' will be forbidden.
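You can test with curl; the blocked query string should return 403 Forbidden while other requests pass through:

curl -I 'http://porcupine.com/paymentcontroller.php?id=oculus&name=johnathan'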

DynamoDB Backup and Restore

If you are using DynamoDB on AWS and facing problems while importing and exporting data, here is a solution for you.

In this solution, you download a python package from github with which you can easily take backups of tables to your local machine or to any s3 bucket.

1. Clone the dynamodump script from github.
git clone https://github.com/bchew/dynamodump.git

2. cd into the directory
cd dynamodump

3. Now you can take a backup of one table, several tables, or the whole database.
Suppose you want to take a backup of one table:
python dynamodump.py -m backup -r aws-region-name -s dynamo-tablename
In this case my aws region is us-west-1 and table name is users_profile.
python dynamodump.py -m backup -r us-west-1 -s users_profile
It will save the backup of table users_profile in a directory named dump inside the cloned dynamodump directory.

To restore this table, either locally or on AWS:
a) To restore on local
python dynamodump.py -m restore -r us-west-1 -s users_profile
b) To restore on AWS
To restore table on AWS, you should have .boto file in your home directory with access and secret keys.
cat ~/.boto
[Credentials]
aws_access_key_id = AKIAJSIE27KKMHXI3BJQ
aws_secret_access_key = 5bEYu26084qjSFyclM/f2pz4gviSfoOg+mFwBH39

Alternatively, the aws keys can be configured in ~/.aws/credentials:
cat ~/.aws/credentials
[default]
aws_access_key_id = AKIAJSIE27KKMHXI3BJQ
aws_secret_access_key = 5bEYu26084qjSFyclM/f2pz4gviSfoOg+mFwBH39

These access and secret keys should have permission to import the table/database into DynamoDB of your AWS account.
python dynamodump.py -m restore -r us-west-1 -s users_profile

4. Similarly you can take a backup of the complete database and restore it. Quote the * so your shell does not expand it:
python dynamodump.py -m backup -r us-west-1 -s "*"
It will save the backup of all tables in a directory named dump inside the cloned dynamodump directory.

To restore all tables, either locally or on AWS:
a) To restore locally
python dynamodump.py -m restore -r us-west-1 -s "*"

b) To restore on AWS
To restore tables on AWS, you should have a .boto file with access and secret keys, or a .aws directory with credentials, in your home directory.
python dynamodump.py -m restore -r us-west-1 -s "*"

5. If you want to take a backup of dynamodb into your s3 bucket, your access and secret keys should have permission to upload content to the s3 bucket.
python dynamodump.py -m backup -r region-name -s "*" -a zip -b s3_bucket_name
In this case my aws region is us-west-1 and the s3 bucket name is oculus-db-backup.
python dynamodump.py -m backup -r us-west-1 -s "*" -a zip -b oculus-db-backup
It will copy dump.zip into your s3 bucket; dump.zip contains all the exported json files of your dynamodb.

Source :
https://github.com/bchew/dynamodump

Sunday, May 12, 2019

Burp Suite - Not able to intercept android app

If your burp suite was working fine for intercepting a mobile application and suddenly it has stopped working, download the latest burp suite from the PortSwigger.net download section and install it.

If you are not able to intercept some mobile applications, the reason may be that those applications use the https protocol, while the application burp suite worked for might have used plain http.

To intercept traffic for a mobile application with https APIs, run burp suite and open its listener in a browser.
Suppose you are running it on port 8080 (the default): open the url http://localhost:8080 in the browser and click the CA Certificate link.

A .der file will be downloaded. Convert the file into a pem file:
openssl x509 -inform der -in /root/Documents/cacert.der -out /tmp/burp.pem

Copy this pem file to your mobile device and add it via the 'Install certificates' option. On an android device you can find the option under Settings > Security or Settings > WLAN > More > Advanced > Install certificates.
Once the certificate is installed, you may get a notification about network monitoring.
Now try to intercept the app again; it should work fine.

Note : This tutorial is for ethical penetration testing purpose.

Ubuntu Firefox and Chrome - Read History on Command Line

If you are habitual of the command line and do most of your tasks, like copying data, report generation and analyzing logs, in the terminal, you must have wondered whether there is a way to read browser history on the command line too.

Well, yes, there is definitely a way. Browsers save their history in sqlite files, and if you know basic SQL queries you can read the history in the terminal.

Firefox History using Terminal
cd ~/.mozilla/firefox
There must be a folder with a random string in its name, something like c18jclvi.default.
cd ~/.mozilla/firefox/c18jclvi.default
You can find the sqlite file here, i.e. places.sqlite.
Copy this file somewhere else, like /tmp; if you query the file in the firefox directory while firefox is running, you may get "Error: database is locked".
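For example, assuming you are still inside the profile directory:

cp places.sqlite /tmp/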
Open sqlite command prompt using following command.
sqlite3 /tmp/places.sqlite
To list all tables, run query
sqlite> .tables
You can find the url history in table moz_places.
sqlite> select * from moz_places;

To decode a timestamp, first find it in the row; usually the 10th column (last_visit_date) is the timestamp column.
Divide this number by 1000000 (it is stored in microseconds), then run:
date -d @1557653257.768815
It will display the correct date and time of this visited url.
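You can also do the conversion inside sqlite itself; a sketch using the last_visit_date column:

sqlite> select url, datetime(last_visit_date/1000000, 'unixepoch') from moz_places order by last_visit_date desc limit 10;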

Chrome History using Terminal
Similarly you can display chrome browser history in terminal.
cd ~/.config/google-chrome/Default
Copy the file named History to some other place like /tmp; if you query the file in the Chrome directory while Chrome is running, you may get "Error: database is locked".
sqlite3 /tmp/History
To list all tables, run query
sqlite> .tables
You can find the url history in table urls.
sqlite> select * from urls;
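Note that Chrome stores last_visit_time as microseconds since 1601-01-01 (the WebKit epoch), so the conversion subtracts the 11644473600-second offset between 1601 and 1970:

sqlite> select url, datetime(last_visit_time/1000000 - 11644473600, 'unixepoch') from urls order by last_visit_time desc limit 10;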

Wednesday, April 3, 2019

Nginx Custom 404 Not Found Error or Internal Server Error page for Django

If you are trying to configure a custom nginx error page for a Python web application like Django but it always shows the Django default 404 error page, here is the solution.

The solutions you found are not working because they cover custom error pages for applications that do not need a proxy pass; Django runs on port 8000 (by default) and you must have used proxy_pass in your nginx virtual host for the domain configuration.

You need to add one extra attribute in your virtual host and all will work well.

1) Create an index.html in /usr/share/nginx/html/index.html

2) Add the following lines under the active virtualhost:

        proxy_intercept_errors on;
        error_page 500 502 503 504 404 /index.html;
        location = /index.html {
            root /usr/share/nginx/html;
        }
3) Reload nginx. Now open an error page or a 404 page; it will show the index.html content in the browser.

Note : Your earlier attempts were not working because you did not use the attribute `proxy_intercept_errors on;`. For a proxy-pass virtualhost, this attribute is essential.

s3cmd ERROR: Test failed: 400 (InvalidToken): The provided token is malformed or otherwise invalid

If you are trying to configure your AWS s3 bucket in s3cmd and you get the Test Failed 400 error although you have used the correct secret and access keys, here is the solution.

You need to export both keys on terminal before configuring them.

Run these commands :
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY


Now configure
s3cmd --configure

You will not get the error while testing credentials.

nginx htpasswd for a django or proxy pass url location

When you try to configure htpasswd for a url location in nginx, you may get a connection timeout, an incorrect redirect, or a 404 not found error. Most of the time the reason is an incorrectly configured proxy pass; with a python or ruby based app where proxy pass is used to reach the web application, this error is very common.

The tricky part: you set htpasswd on the login page, but after logging in you do not land on the My Account page; it shows 404 not found. When you remove the htpasswd, the error does not show up after entering your login credentials.

So how can you set htpasswd successfully for a url location?

Solution :
Open the nginx configuration file where you have set the virtualhost for your domain and add this location snippet below location / { ... }.

Suppose you want to set htpasswd on login url i.e. http://domain.com/login. It should prompt htpasswd. Rest of the site should work fine without htpasswd.
  location /login {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;
        proxy_pass         http://127.0.0.1:8000$uri;
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
Use $uri in the proxy_pass attribute after the port; then you will not get the 404 Not Found error after login.
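If you have not created the password file yet, htpasswd from apache2-utils can generate it (the username here is an example):

sudo apt-get install apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd someuser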

Saturday, March 16, 2019

Linux Ubuntu Skype : Too Much Noise at Other End

If you are using the Ubuntu operating system and the other person cannot hear you clearly because of too much noise at their end, and everything is fine on their side, this may be an issue with your sound settings. Here is a small tip to fix the noise distortion heard by the person at the other end.

In Ubuntu, open Sound Settings: right-click the sound icon on the panel, or go to System Settings and click Sound.
Under the Input tab, set the input volume near the minimum (7 to 8% of the total value).
Make sure you do this in the Input tab; leave the Output tab at its default settings.



This should solve your noise issue. 

Monday, February 11, 2019

wkhtmltopdf error : QXcbConnection: Could not connect to display

Solution :
Make sure you have installed wkhtmltopdf with patched QT version.
wkhtmltopdf --version
wkhtmltopdf 0.x.x (with patched qt)

Download the latest wkhtmltopdf from here and install it accordingly.
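If you cannot switch to the patched-Qt build, running wkhtmltopdf under a virtual X server is a common workaround (requires the xvfb package):

xvfb-run wkhtmltopdf input.html output.pdf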

host smtp.gmail.com said: 530-5.5.1 Authentication Required.

Postfix Error :

gmail smtp error : While sending mail through postfix using gmail smtp details

host smtp.gmail.com[74.125.24.108] said: 530-5.5.1 Authentication Required. Learn more at 530 5.5.1  https://support.google.com/mail/?p=WantAuthError g70sm5432169pfg.98 - gsmtp

Solution :
1. Make sure you have added the correct port 587 in main.cf and in sasl_passwd.
2. Make sure 'Less secure app access' is turned on in your gmail account. A minimal relay configuration sketch follows below.
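For reference, the usual gmail relay settings look like this; the address and password are placeholders, and the hash table needs postmap after editing:

In /etc/postfix/main.cf:
relayhost = [smtp.gmail.com]:587

In /etc/postfix/sasl_passwd:
[smtp.gmail.com]:587 username@gmail.com:password

sudo postmap /etc/postfix/sasl_passwd
sudo service postfix restart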



bitnami - Change phpmyadmin url and access using public IP

Since we are going to make the url public, it is good practice to make it non-predictable.
sudo nano /opt/bitnami/apps/phpmyadmin/conf/httpd-prefix.conf
Alias /phpMyAdminZAQWDCFRTdf1236 "/opt/bitnami/apps/phpmyadmin/htdocs"

Now grant the privileges to make it public.

sudo nano /opt/bitnami/apps/phpmyadmin/conf/httpd-app.conf
<IfVersion >= 2.3>
Require all granted
</IfVersion>
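Restart the bundled apache so the changes take effect (ctlscript.sh is the standard Bitnami control script):

sudo /opt/bitnami/ctlscript.sh restart apache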

bitnami mysql password:
cat /home/bitnami/bitnami_application_password
This is the wp-admin password / mysql root password.
cat /home/bitnami/bitnami_credentials
This file begins "Welcome to the Bitnami WordPress Stack" and lists the default application credentials.

Tuesday, January 8, 2019

Linux System - date not changed using command line

If you are trying to change the date of a Linux system using the command `date -s`, it prints the changed date as a result, but when you run the date command again it shows the old date. This happens because NTP synchronization is enabled and immediately resets the clock.

Solution :

Disable the NTP using command.

sudo timedatectl set-ntp 0

Now change date again using `date -s` command. Now date should be changed successfully.
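You can confirm the NTP status with timedatectl; its output shows whether network time synchronization is on:

timedatectl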

ERROR: The Compose file './docker-compose.yml' is invalid because Unsupported config option for services 'healthcheck'

While building and starting the container with command

docker-compose up -d

You get following error

ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.db: 'healthcheck'
Unsupported config option for services.redis: 'healthcheck'


Solution :
Check your docker-compose version.

docker-compose --version 

Your docker-compose is older than 1.9.0, which is the first release to support the healthcheck option (Compose file format 2.1). Upgrade docker-compose using pip as shown below, and then check the version of /usr/local/bin/docker-compose.
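A sketch of the upgrade via pip (pip installs to /usr/local/bin, which may differ from your distro package's location):

sudo pip install --upgrade docker-compose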

/usr/local/bin/docker-compose --version

After upgrading, if you get the same issue again, compare the two versions:

docker-compose --version
/usr/local/bin/docker-compose --version

If they differ, use the upgraded docker-compose with its absolute path:

/usr/local/bin/docker-compose up -d

Nginx ssl error : Failed SSL: error:0D0680A8

If you are getting this error while restarting / reloading nginx after making changes in the ssl configuration, the following solution may work for you.

Error :
[emerg] 26062#26062: PEM_read_bio_X509_AUX(".com.key") failed (SSL: error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error:Type=X509_CINF error:0D08303A:asn1 encoding routines:ASN1_TEMPLATE_NOEXP_D2I:nested asn1 error:Field=cert_info, Type=X509 error:0906700D:PEM routines:PEM_ASN1_read_bio:ASN1 lib)


Solution :
1. Make sure you have generated correct private key file for your domain.
2. Make sure you have generated correct digital certificate file (crt) for your domain.
3. Make sure you have added the private key file for the attribute ssl_certificate_key and the digital certificate file for the attribute ssl_certificate in your nginx configuration file.

 ssl_certificate_key /etc/nginx/ssl/domain.com.key;
 ssl_certificate /etc/nginx/ssl/domain/960668e6d2dd456e.crt;
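A common cause is a swapped or mismatched key and certificate; you can check that the pair matches by comparing the moduli (the two md5 sums must be identical):

openssl x509 -noout -modulus -in /etc/nginx/ssl/domain/960668e6d2dd456e.crt | openssl md5
openssl rsa -noout -modulus -in /etc/nginx/ssl/domain.com.key | openssl md5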