Wednesday, March 4, 2020

Upload file on Google Drive using shell script

If you want to upload backup files, such as SQL dumps, to Google Drive regularly using an automated script, here is a solution for you.

1. First, clone the following repository from GitHub.
git clone https://github.com/labbots/google-drive-upload.git
2. Now run the following commands.
cd google-drive-upload
./google-oauth2.sh
It will ask you for the Client ID and Client Secret of your Google account. To generate them, open https://console.developers.google.com/apis/credentials and choose
Create Credentials > OAuth Client ID > Other > Create
Now copy the Client ID and Client Secret and use them in your script. If you get a 403 restricted-access error, go to the OAuth consent screen and create a project first, then generate the Client ID and Client Secret.

3. Now enter the Client ID and Client Secret. They will be stored in ~/.googledrive.conf. To copy a file to Google Drive, run:
sudo ./upload.sh -v -r 1zfZKk37SgPFc4k_sbAVI_Xo9U427X3KU databasefile.sql.zip
1zfZKk37SgPFc4k_sbAVI_Xo9U427X3KU is the ID of the destination Drive folder, which you can find in the folder's URL (e.g. https://drive.google.com/drive/folders/<folder-id>).

4. Now you get a device URL along with a code; open the URL in a browser and submit the code.

Now this device has access to upload files to Google Drive. Entering the code is a one-time activity; once your Google account knows about the device, it will not ask for the code again. That is why you can create a shell script and add it to cron to take regular backups automatically, as sketched below.
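For example, a backup script plus a cron entry could look like the following minimal sketch. The paths, database name, and folder ID below are placeholders; adjust everything for your own setup.

#!/bin/bash
# backup-to-drive.sh - dump a MySQL database and upload it to Google Drive.
# "mydatabase", the paths, and the folder ID are example values.
set -e
BACKUP="/tmp/databasefile-$(date +%F).sql.gz"
mysqldump mydatabase | gzip > "$BACKUP"   # assumes credentials in ~/.my.cnf
/home/ubuntu/google-drive-upload/upload.sh -r 1zfZKk37SgPFc4k_sbAVI_Xo9U427X3KU "$BACKUP"
rm -f "$BACKUP"

Schedule it for the same user who ran google-oauth2.sh (crontab -e):
0 2 * * * /home/ubuntu/backup-to-drive.sh >> /tmp/drive-backup.log 2>&1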

Note : 
If you have set up the cron job but the file is not transferred to Google Drive, even though it is transferred when you run the same script manually, the likely cause is that you set up the cron job for the root user. Root cannot find .googledrive.conf in its home directory. When you run the script manually, you may use sudo, but the user invoking the script is still not root, so the script finds .googledrive.conf in that user's home directory.

To fix this, set up the cron job for the user who configured Google Drive, or copy the .googledrive.conf file to root's home directory. This should solve the issue.

Export DynamoDB as csv

You may have a DynamoDB table with lots of data that you want to view in an Excel file. DynamoDB is schemaless, with no fixed columns, but it can still be exported as CSV. You may have a number of tables in DynamoDB; here is a way to export them.
1. Clone the following repo from GitHub.
git clone https://github.com/edasque/DynamoDBtoCSV.git
2. Install the Node-based dependencies.
cd DynamoDBtoCSV
npm install
3. Fill in the access key, secret key, and region in the file config.json (inside the DynamoDBtoCSV directory).
{
    "accessKeyId": "QWERTNVEXAMPLEMUJMKI",
    "secretAccessKey": "+WSXCD/SdEfExAmPlEJkjKL/KlOiK45Iu",
    "region": "us-east-1"
}
4. Now export the tables using the following commands.
node dynamoDBtoCSV.js -t tableOne > tableOne.csv
node dynamoDBtoCSV.js -t tableTwo > tableTwo.csv
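If you have many tables, you can loop over them instead of exporting each one by hand. A rough sketch, assuming the AWS CLI is installed and configured with the same credentials as config.json:

# export every table in the region to its own CSV file
for table in $(aws dynamodb list-tables --query 'TableNames[]' --output text); do
    node dynamoDBtoCSV.js -t "$table" > "$table.csv"
done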


AWS : take backup of all IAM Roles and Policies


If you want to take a backup of all your IAM roles and policies, here is a solution for you.

1. Configure the AWS CLI with your access key and secret key.
aws configure
2. Save all roles and policies in a JSON file.
aws iam get-account-authorization-details > output.json
There are several ways to read this JSON file with a script.
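For example, jq can pull out specific pieces of the dump. A small sketch (assuming jq is installed), using the field names that get-account-authorization-details returns:

# list all role names
jq -r '.RoleDetailList[].RoleName' output.json
# list all customer managed policy names
jq -r '.Policies[].PolicyName' output.json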

Now you have the complete data of your roles and policies.

No log written in Syslog and auth.log

If you want to check some errors in /var/log/syslog or /var/log/auth.log and you find both files empty, the system is not able to write logs to these files because of a permission issue or some misconfiguration. Here is a solution for you.

Solution :
Reconfigure rsyslog
sudo apt-get install --reinstall rsyslog
sudo service rsyslog restart
If it still does not work, the issue may be with permissions; check the owner of both files.
ls -l /var/log/syslog
ls -l /var/log/auth.log
The owner and group should be syslog and adm. If the owner or group is set to root or anything else, change them with the following commands:
sudo chown syslog:adm /var/log/syslog
sudo chown syslog:adm /var/log/auth.log
Now check whether it starts writing logs to both files. If it still does not work, restart the rsyslog service.
sudo service rsyslog restart
It should work now.
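To verify, you can write a test entry with logger and confirm it lands in syslog:

logger "rsyslog test message"
tail -n 5 /var/log/syslog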

Ubuntu 16.04 mysql 5.6 - my.cnf changes are not taking effect

Check for errors in the syslog: /var/log/syslog

If you see an entry like this:
kernel: [83454.649662] audit: type=1400 audit(1583063492.688:61): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/etc/mysql/my.cnf.fallback" pid=27270 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0

Here is the solution for you.

Solution :
The root cause is a bug in the MySQL 5.6 package for Ubuntu 16.04.

The issue is with apparmor. It is denying access to a symlinked file.
Run the following command as root.
echo '/etc/mysql/** lr,' >> /etc/apparmor.d/local/usr.sbin.mysqld
Note that prefixing this with sudo does not work, because the redirection is performed by your unprivileged shell, not by root. As a sudo user, use tee instead:
echo '/etc/mysql/** lr,' | sudo tee -a /etc/apparmor.d/local/usr.sbin.mysqld
But if it still does not work, edit the file directly:
sudo nano /etc/apparmor.d/local/usr.sbin.mysqld
and add the following line at the end of the file:
/etc/mysql/** lr,
Now reload apparmor
sudo systemctl reload apparmor
Make some changes in my.cnf and restart MySQL; the changes should now take effect.
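As a quick sanity check, you can confirm the profile is loaded and that no new denials appear for mysqld after the restart:

sudo aa-status | grep mysqld          # the mysqld profile should be listed
sudo service mysql restart
grep 'apparmor="DENIED"' /var/log/syslog | grep mysqld | tail -n 5   # should show nothing new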

s3cmd error : /usr/bin/env: ‘python2’: No such file or directory

Solution :
The system cannot find /usr/bin/python2, or a /usr/bin/python pointing to Python 2.x, which s3cmd needs. To solve this, install the following package on Debian-based systems like Ubuntu.

sudo apt-get install python-minimal
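On newer Ubuntu releases the python-minimal package is no longer available; as an alternative (the package name is an assumption, verify it for your release), something like this may work:

# newer releases ship Python 2 under a different package name
sudo apt-get install python2-minimal
# or, if a Python 2 interpreter already exists, symlink it onto the expected name
sudo ln -s /usr/bin/python2.7 /usr/bin/python2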