
IAmRoberson.com

Just a Tech Guy

How To Automate Ubuntu Linux Backups (part 2)

In this second part of HTAULB we'll cover how to automate MySQL database backups. I'll be the first to admit this approach is completely simple and gets the job done, but it is not the most secure: you will be hard-coding your SQL password into a script. Even though you can restrict read permissions so no one else can see it, I would not recommend this to anyone sharing a server or its resources with other people.

Now, on with the show!

If you haven't read the first part of this series, please go back and do so. I'll wait….

Back? Took you long enough. I had you read that because we will be doing some of the same tasks in order to automate the SQL backups.

You will need to go to the location in which you are keeping your .sh scripts. (We use /bin here at IAmRoberson)

Use the editor of your choice to create the script file, giving it whatever name you like.

Add the following line for each database you wish to dump. Note that there is no space between -p and the password:

mysqldump -u username -h localhost -p'password' databasename | gzip -9 > /var/backups/nameoftargetfile-$(date +%Y%m%d).db.sql.gz
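Putting it together, here is a minimal sketch of the whole script, assuming two made-up databases named blogdb and shopdb (swap in your own names, user, and password):

#!/bin/bash
# Dump each database and gzip it into /var/backups with a date stamp in the name.
mysqldump -u username -h localhost -p'password' blogdb | gzip -9 > /var/backups/blogdb-$(date +%Y%m%d).db.sql.gz
mysqldump -u username -h localhost -p'password' shopdb | gzip -9 > /var/backups/shopdb-$(date +%Y%m%d).db.sql.gz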

Exit from the editor and chmod the file to 700 so no one but you can read or run it; remember it now contains your password. (See part 1 for the exact syntax.)

Add an additional line to the crontab, just like you did in part 1 of this exercise.
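For example, a hypothetical entry that runs a script saved as /bin/sql_backup.sh (a name I made up; use your own) every night at 0200 would be:

0 2 * * * /bin/sql_backup.sh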

 

You are done! Go back up some stuff now!

– Jason

 

Did this article help you any? Please feel free to leave feedback.

How To Automate Ubuntu Linux Backups (part 1)

I recently redesigned my home setup and wanted to automate the backups for my web servers and SQL server. I run Ubuntu 9.10 using a semi-typical LAMP configuration. For those of you unfamiliar with LAMP, it is an acronym for Linux, Apache, MySQL, and PHP. Your first step is to determine what you want backed up. For me, the /var/www and /etc/apache2 directories were the primary targets. After that you need to write a shell script to tar/gzip those directories and put the archive into the folder where you want your backups to reside. I will say the smartest thing to do is to eventually copy these files somewhere other than the drive/box you're backing up, so a physical drive failure won't take your backups with it and make this entire process moot. Sounds easy, right? It is, and I'll show you just how easy it really is.
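(As an aside on that off-box copy: a rough sketch, assuming a second machine reachable over SSH that I'll call backuphost, a made-up name, would be a nightly rsync:

rsync -av /var/backups/ user@backuphost:/srv/backups/

An scp, a mounted NAS share, or similar works just as well; the point is the copies live on different hardware.)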

After you’ve determined what you want backed up you need to write that shell script. Here is how to accomplish that:

From a command console on your Linux box, change to the directory where you want your .sh script to reside. For example: /bin/

Fire up your CLI editor of choice and edit a file that will become your shell script. For me this is VIM so I would type the following:

vim backup_script.sh

Add the following lines, substituting my locations and file names with your own:

#!/bin/bash
cd /var/backups/
tar -czf web_backup.$(date +%Y%m%d).tar.gz -C / var/www/ etc/apache2/

The shebang (#!) line tells the system to run the script with bash. The cd line changes to the directory where you will drop the backup file; I know you can do this several ways, but this way you can see each step of what is being accomplished.

The tar line creates a tar.gz named web_backup.(today's date).tar.gz from the /var/www and /etc/apache2 directories; the -C / switch stores the paths relative to the root of the filesystem.

Exit and save your script.

You will need to change the permissions on the file in order to run it, so type the following, again subbing 'script_name' with your script's name (700 makes the script readable, writable, and executable by you alone, which matters once a script holds anything sensitive):

chmod 700 script_name.sh

You can type the script name (or ./backup_script.sh from its directory) and it should execute. Once you have tested it and see that a tar.gz is created in the directory you specified, you can move to the next step, which is to automate it with Cron.
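Testing it, using the names from above, looks something like this (tar's -t flag lists an archive's contents without extracting anything):

backup_script.sh
ls -lh /var/backups/
tar -tzf /var/backups/web_backup.$(date +%Y%m%d).tar.gz | head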

==========

Everything work? Hope so, because we’re moving on regardless.

Next you’ll want to add this script to your crontab.

Type crontab -l to get a list of all jobs in your crontab. Don't worry if it says something about one not existing; you probably don't have one set up yet.

Type crontab -e to edit the file.

There is a great write-up by Kevin van Zonneveld at http://kevin.vanzonneveld.net that explains how crontab scheduling works. I'll give you a quick review, but I recommend heading over there and checking out his site to learn more.

The crontab format is very simple. The first part of each line is a series of five fields that set the schedule, and the second part is the command you would like Cron to run on that schedule.

The default syntax is: * * * * * /blah/blah.sh

The five fields each stand for something. In order they are: minute | hour | day of month | month | day of week (0-6, with Sunday being 0). An * in a field means "every" (every minute, every month, and so on).

So, for example, in order to run the task every Friday at 0130 (am) you would use:

30 1 * * 5 /bin/backup_script.sh

Clear as mud right? 😉 You get it, I know you do. Again, go see Kevin’s site for more explanation and tips/tricks.

You could schedule it for a few minutes ahead of the current time and wait to see if it works correctly before you start depending on it.
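For example, if it is currently 13:55, a throwaway entry like this one fires at 14:00; just delete it from the crontab once you have seen the backup land:

0 14 * * * /bin/backup_script.sh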

Next post will be how to automate SQL database backups with mysqldump and Cron.

Till next time, Peace Out Yo.

 

– Jason

Preventing A Disaster (with Powershell)

So, I walked into a production issue this morning at work that could have been easily prevented. A process we run moves files to a directory and then imports the information from those files. This occurs every day, and the files being moved into the import directory will not overwrite any files already sitting there. Long story short, the wrong files got imported, because the ones that needed to be dropped off were moved to a staging area instead. Had I known there were files in the directory, a 5-minute fix would have saved me a half day's recovery. I decided to prevent this from happening again with a simple script.

The following script checks a directory on a server for any files with the csv extension. If it doesn't find any, all is well and life goes on. If files are found, it emails/pages me so I can move them before the process runs.

1    $filecheck = Test-Path "\\servername\directory\*.csv"
2    If ($filecheck) {
3        $SmtpClient = New-Object System.Net.Mail.SmtpClient
4        $SmtpClient.Host = "localhost"
5        $SmtpClient.Host = "exchange.server.address"
6
7        $msg = New-Object Net.Mail.MailMessage
8        $msg.From = "Your Server <alerts@yourdomain.com>"
9        $msg.To.Add("email@address.net")
10       $msg.Subject = "Your Directory not Empty!"
11       $msg.Body = "The directory for the blah is not empty! " + (Get-Date)
12       $SmtpClient.Send($msg)
13
14   }

Ok, broken down: line 1 sets a variable recording whether or not the listed path exists; Test-Path returns a boolean. Line 2 is a simple If statement. In Powershell the Else branch is optional, so leaving it off means "do nothing". If ($variable) checks for a boolean True; if we needed to check for False we would use If (!$variable). Lines 3-12 are what it takes to send email over SMTP.

Line 3 creates the SMTP client object.
Line 4 points the client at localhost as the default SMTP server.
Line 5 overrides that with the 'real' server to relay the email off of. Put your company's Exchange or email server's address in here and make sure it allows relays (or delete this line to stick with localhost).
Line 7 creates the message object.
Line 8 is whatever you want your From address to say. The display name can be anything, but it has to be followed by a validly formatted address (mine here is a placeholder).
Line 9 is where you add an email address to send to. For each extra recipient, just add an identical line with a different address.
Lines 10 and 11 are obvious. Change them to whatever you like.
Line 12 sends the email.
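As a side note, if you're running Powershell 2.0 or later, the built-in Send-MailMessage cmdlet collapses lines 3-12 into one call. A minimal sketch using the same placeholder names as above:

If (Test-Path "\\servername\directory\*.csv") {
    Send-MailMessage -SmtpServer "exchange.server.address" `
        -From "alerts@yourdomain.com" -To "email@address.net" `
        -Subject "Your Directory not Empty!" `
        -Body ("The directory for the blah is not empty! " + (Get-Date))
}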

Let me know if you have any questions or suggestions.

– Jason

Finding a Service’s Authentication ID

At work, we Administrators are considered prima donnas, and most of the time we accept that and will even admit to it. We like things a certain way and we tend to get set in our ways. Recently the company hired a new VP of a department that promotes a more secure working environment. To protect the guilty, I'll not use his name; instead I'll refer to him as Satan. *smirk* Satan has good intentions in the long run, but the social and personality skills of a pissed-off bull. He shows no professional courtesy, nor does he wish to. Now I will also note that anyone in his position will always be considered the 'bad guy', but I think he really enjoys the title rather than just accepting it as a side effect of his profession.

I say all of that to say this: we, the prima donna group, have been in the habit of not changing our passwords as regularly as we should, and in cases where our bosses and end users beat us into submission about getting something to work "no matter how it's done", we've picked up some bad practices over the years. Such as… running a service, a scheduled task, or a drive mapping with our own credentials. Well, without warning, Satan decided to enforce the company policies in the middle of a work day and force a company-wide password change. Under most circumstances that wouldn't be so bad; however, it broke a lot of things that people had forgotten about for years (i.e. a service that's been running under 'Johnsmith' for the last 28 months). The key point in this was "WITHOUT WARNING", none. He is 100% correct in enforcing the rule, but professional courtesy should have given us a day or two's heads-up so we could prepare.

Either way, I scrambled to make sure none of my servers were affected, and none were; other people's, however, were. I put together this powershell script to at least run through a list of servers, look for any IDs that match the syntax of our standard user IDs, and kick out a report showing which ones are potential dangers.

# The first section merely defines the log file that the script will write to.

$log = "c:\temp\Service_IDs.txt"

# The section below looks for the log file and, if it exists, wipes it and writes a date stamp at the top.

$logexist = Test-Path $log
If ($logexist -eq $true)
{
$date = Get-Date
Write-Output ("The following Information is for Servers on your list: " + ($date)) | Out-File $log
Start-Sleep -Seconds 2
}

# The section below reads from a list of servers (defined in the $servers variable) and loops through each.

$servers = gc "c:\scripts\powershell\servers_prod.txt"
foreach ($server in $servers){
Start-Sleep -Seconds 1
# (the loop stays open here; it gets closed after the logging section below)

# This part queries each service on each server and searches for any whose account login name matches the [regex] pattern.

# Note: the "s*d" pattern below is a stand-in; substitute a regex that matches your company's standard user-ID format.
$svc = gwmi win32_service -ComputerName $server | Where-Object {$_.startname -match "s*d"}
foreach ($service in $svc){

# The following section writes each result to the log in a format that is quicker to understand, then closes both loops.

"Server: " + ($server) + " | Service Name: " + ($service.Name) + " | Service Account ID: " + ($service.startname) | Out-File -Append $log
}
}
Start-Sleep -Seconds 2

# The last line merely opens the file when it’s done appending.

Invoke-Item $log

* Notice the (Start-Sleep *) calls in several locations. They keep the script from stepping all over itself: run without them, you will see errors where one loop contends with another for the log file. Basically the script runs too fast, and one section is still processing while the next one wants access to the file.
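If the sleeps bug you, one alternative (a sketch, not what I ran) is to build the whole report in memory and only touch the file once at the end, which removes the contention entirely:

$report = $servers | ForEach-Object {
    $server = $_
    gwmi win32_service -ComputerName $server |
        Where-Object {$_.startname -match "s*d"} |
        ForEach-Object { "Server: " + $server + " | Service Name: " + $_.Name + " | Service Account ID: " + $_.startname }
}
$report | Out-File $log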

** I am by no means good at scripting. I do it on my own for personal and work reasons and am sure to be told by many that there are easier ways to accomplish the same tasks. Feel free to drop me a line or comment.

– Jason