December 25, 2010

How to make a banner from the command line

I thought I'd send out some command-line Christmas cheer. What better way than to use figlet?

mike@shiner $ figlet 'Merry XMAS!'
 __  __                       __  ____  __    _    ____  _ 
|  \/  | ___ _ __ _ __ _   _  \ \/ /  \/  |  / \  / ___|| |
| |\/| |/ _ \ '__| '__| | | |  \  /| |\/| | / _ \ \___ \| |
| |  | |  __/ |  | |  | |_| |  /  \| |  | |/ ___ \ ___) |_|
|_|  |_|\___|_|  |_|   \__, | /_/\_\_|  |_/_/   \_\____/(_)
                       |___/                               
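
figlet also comes with a number of alternate fonts that you can select with the -f option. For example, assuming the 'slant' font is installed (it ships with the standard figlet distribution), the following prints the same banner in a slanted style.

mike@shiner $ figlet -f slant 'Merry XMAS!'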

December 22, 2010

How to delete the last lines of a file

A friend of mine showed me a way to delete the last lines of a file after he read the article on how to delete the first lines of a file. The following is an example of how he deleted the last line with sed.

mike@shiner $ sed '$d' really_big_file.txt > new_file.txt

You'll need to use the single quotes to prevent the shell, such as bash, from interpreting $d as a variable. The '$' symbol matches the last line, similar to how it matches the end of the line in a regular expression. The 'd' tells sed that we want to delete the line.

Another way to do this, albeit less elegant, is shown next.

mike@shiner $ tac really_big_file.txt | sed 1d | tac > new_file.txt

If you've never seen the tac command, it's cat in reverse: it prints the lines of its input in reverse order. I think it's kind of nifty that cat spelled backwards is tac, which helps with remembering the command.

Here's what I'm doing in each step:

1. cat the file in reverse.
2. Delete the first line.
3. cat the file in reverse again.
4. Save the output to 'new_file.txt'.

The end result is the same as before, but I can see someone arguing against it because of readability.
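
If you want to drop more than one line from the end, the same tac trick combines with the line ranges from the article on deleting the first lines of a file; the first command below removes the last three lines. If you're on GNU coreutils, head also accepts a negative line count that does the same thing in one step.

mike@shiner $ tac really_big_file.txt | sed 1,3d | tac > new_file.txt
mike@shiner $ head -n -3 really_big_file.txt > new_file.txt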

December 19, 2010

How to find files larger than X

You can use the following command to find files that are larger than roughly 50 MB.

mike@shiner $ find . -type f -size +50000k

The -type f argument tells find that we only want regular files and the -size argument specifies files larger than 50,000 kibibytes (roughly 50 MB). We can use the following command to get a nicely formatted output with the name of the file and the size.

mike@shiner $ find . -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }' 

Alternatively, we can do the following to get the same output.

mike@shiner $ find . -type f -size +50000k | xargs ls -lh | awk '{ print $9 ": " $5 }' 

xargs is a handy command that reads its input and passes it as arguments to the command you give it, which in this case is ls -lh. Note that both of these versions assume the file names don't contain spaces.
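
As an aside, GNU find also understands an 'M' suffix (units of 1,048,576 bytes), so on most Linux systems you can express the size limit more directly.

mike@shiner $ find . -type f -size +50M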

December 17, 2010

How to change the default ssh port

On Ubuntu, we can edit the '/etc/ssh/sshd_config' file to change the port our ssh server is listening on. You should find something similar to the following in this file.

Port 22

You can change this value to any integer between 1025 and 65535 (inclusive). You should pick a value that won't conflict with any other services. If you want to limit ssh access to specific users, then you can add the users to the 'AllowUsers' setting separated by spaces.
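
For example, a configuration that listens on port 4200 and only allows logins for the user 'mike' (that user name is just for illustration) would contain the following lines.

Port 4200
AllowUsers mike

Once you make the changes, you'll need to reload ssh to use the new configuration.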

mike@shiner $ sudo /etc/init.d/ssh reload
[sudo] password for mike: 
 * Reloading OpenBSD Secure Shell server's configuration sshd
   ...done.

You should stay logged in until you can test the configuration by logging in from a different machine. If we changed the 'Port' value to 4200, then we can use the following command to connect to the remote machine on that port.

mike@primus $ ssh -p 4200 mike@shiner
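
As a convenience on the client side, you can also record the port in '~/.ssh/config' (assuming OpenSSH on the client), so you don't have to remember the -p option. With an entry like the following, 'ssh mike@shiner' will use port 4200 automatically.

Host shiner
    Port 4200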

December 15, 2010

How to create a path of directories

I've often created a directory structure using a series of mkdir commands. For example,

mike@shiner $ mkdir foo
mike@shiner $ mkdir foo/bar
mike@shiner $ mkdir foo/bar/test

You can create the same directory structure as above by using the -p option.

mike@shiner $ mkdir -p foo/bar/test
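
If your shell supports brace expansion, as bash does, you can combine it with -p to create several branches at once. The following creates both foo/bar/test and foo/baz/test.

mike@shiner $ mkdir -p foo/{bar,baz}/test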

December 13, 2010

How to execute vim commands from the command line

I was trying to execute a regular expression over multiple lines when I ran across an example of how to execute vim commands from the command line. I already knew how to do this from inside vim, so it was a serendipitous find. First, I'm going to show an example of how to do a simple replace. Here are the current contents of our test file.

mike@shiner $ cat test_regex.txt
this is
a test

We can run the following command to replace 'this' with 'there'.

mike@shiner $ vim test_regex.txt -c '%s/this/there/' -c 'wq'
mike@shiner $ cat test_regex.txt
there is
a test

The -c arguments are executed sequentially. They're equivalent to typing : and then the contents of the argument. In the example above, we do a search/replace and then write/quit the file. If the "-c 'wq'" is omitted, then vim will remain open. Now back to what I was trying to do originally.

mike@shiner $ vim test_regex.txt -c '%s/there\_.*a/regex' -c 'wq'
mike@shiner $ cat test_regex.txt
regex test

The \_ tells vim that the . should also match the newline character. Hence, the expression matches 'there', 'a', and everything in between, and the entire match is replaced with 'regex'.
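
The same approach works for any ex command, not just substitutions. For example, the following (with a made-up file name) deletes every blank line in a file and saves it without ever opening an interactive session.

mike@shiner $ vim some_file.txt -c 'g/^$/d' -c 'wq'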

December 11, 2010

How to remove a column from a file

I often need to remove a column from a character-delimited file so I can do some additional processing, such as counting up values. I'm going to use a file with the following contents as an example.

mike@shiner $ cat name_age_city.txt
Mike|35|New York
Jason|28|Dallas
Fox|32|Washington
Mike|28|Los Angeles
Ann|23|Denver
Dana|32|Louisville
Mike|32|Atlantic City
Ryan|30|Austin

We can use the cut command to extract the column with the age information.

mike@shiner $ cat name_age_city.txt | cut -d '|' -f 2
35
28
32
28
23
32
32
30

The -d option tells cut that the delimiter for the columns is the '|' character and the -f option specifies the second column. Let's say we want to know how many people have the same age. We can use the following command to count the occurrences of each age.

mike@shiner $ cat name_age_city.txt | cut -d '|' -f 2 | sort | uniq -c
      1 23
      2 28
      1 30
      3 32
      1 35
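
Since the original motivation was counting up values, here's one way to total the ages as well, assuming paste and bc are available: join the extracted column with '+' signs and hand the expression to bc.

mike@shiner $ cat name_age_city.txt | cut -d '|' -f 2 | paste -s -d '+' - | bc
240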

If we need to extract multiple columns, cut gives us this functionality as well.

mike@shiner $ cat name_age_city.txt | cut -d '|' -f 1,3
Mike|New York
Jason|Dallas
Fox|Washington
Mike|Los Angeles
Ann|Denver
Dana|Louisville
Mike|Atlantic City
Ryan|Austin

Unfortunately, cut won't allow us to reorder the columns. We get the same results if we try "cut -d '|' -f 3,1". On a side note, we can use awk to give us functionality similar to cut. One advantage awk has over cut is the ability to reorder the columns.

mike@shiner $ cat name_age_city.txt | awk -F '|' '{print $3"|"$1}'
New York|Mike
Dallas|Jason
Washington|Fox
Los Angeles|Mike
Denver|Ann
Louisville|Dana
Atlantic City|Mike
Austin|Ryan

The awk syntax requires us to put the '|' character in the print output to get the same behavior as cut.
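
Alternatively, you can let awk insert the delimiter for you by setting its output field separator with -v OFS='|' and separating the fields with a comma in the print statement. The following produces the same output as above.

mike@shiner $ cat name_age_city.txt | awk -F '|' -v OFS='|' '{print $3, $1}'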

December 8, 2010

How to delete the first lines of a file

I had to delete the first line of a file today, but I didn't want to open the file because it was close to 1 GB in size. A file this large will make most editors pretty unhappy. Instead of opening an editor to just delete a line, I used the following handy command.

mike@shiner $ sed 1d really_big_file.txt > new_file.txt

The '1d' tells sed that we want to delete line 1. The next command will delete the first three lines. The '1,3' is defining a range from line 1 to line 3.

mike@shiner $ sed 1,3d really_big_file.txt > new_file.txt
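
If you're using GNU sed and don't need to keep the original file around, the -i option makes the change in place instead of writing to a new file.

mike@shiner $ sed -i 1d really_big_file.txt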

December 7, 2010

How to split a file

I often have a file that's too big to work with comfortably. A great way to split the file is to use.....wait for it.....the split command. I usually split a file by line count. Here's an example invocation:

mike@shiner $ split -l 100 -d test_file.txt test_file_

The -l option defines the maximum number of lines per file. If 'test_file.txt' has 951 lines, then we'll end up with 10 files: 9 with 100 lines each and 1 with 51 lines. The -d option tells split to use numeric suffixes. 'test_file.txt' is the file we're going to split and 'test_file_' is the prefix used for each of the output files.

mike@shiner $ ls
test_file.txt  test_file_02  test_file_05  test_file_08
test_file_00   test_file_03  test_file_06  test_file_09
test_file_01   test_file_04  test_file_07
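
If you later need to put the pieces back together, the numeric suffixes sort in order, so a simple cat with a glob (writing to whatever file name you like, 'rejoined.txt' here) reconstructs the original.

mike@shiner $ cat test_file_* > rejoined.txt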