Linux Terminal Tips and Tricks

Want to do some wild, wacky, and wonderful stuff in your Linux terminal during testing? The following is a collection of tricks that you can use to make your life easier and to become your shell’s Sous Chef.

These instructions assume Bash. Constructs such as loops and conditional logic differ between shells, but Bash is the default on most testing systems, so you should be fine.

Also: the Minimal List of Useful Bash Shortcuts, including Clément Chastagnol’s “cp monfichier dir” graphic showing control-key combos to move and delete on the CLI.

Sorting IP Addresses

A pain point for several testers seems to be sorting IP addresses. You should do it. It’s super fun and simple to do (and all the cool testers are doing it).

It’s not nearly as cumbersome as it might sound. The Linux sort command is equipped with version sorting, which works a treat against IP addresses. Once we sort, we can use uniq to get unique addresses and wc for a count (useful while reporting):

$ cat servers.txt							# Output raw addresses
10.4.0.0  
10.4.6.3  
10.8.9.2  
10.6.10.9  
10.6.0.5  
10.8.3.1  
10.4.6.3  
10.6.10.9  
10.8.3.1  
10.4.0.0
 
$ cat servers.txt | sort -V					# Sort IPs using version sorting
10.4.0.0  
10.4.0.0  
10.4.6.3  
10.4.6.3  
10.6.0.5  
10.6.10.9  
10.6.10.9  
10.8.3.1  
10.8.3.1  
10.8.9.2
 
$ cat servers.txt | sort -V | uniq			# Only display unique addresses
10.4.0.0  
10.4.6.3  
10.6.0.5  
10.6.10.9  
10.8.3.1  
10.8.9.2
 
$ cat servers.txt | sort -V | uniq | wc -l	# Get a count of unique addresses
6
 

Note that the use of sort and uniq is redundant in this case. You can instead use the -u flag of sort to only print unique lines. As such, sort -Vu is a replacement for sort -V | uniq (Credit to Tim Fowler).
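To see the equivalence end to end, here’s a self-contained rerun of the count above against a throwaway file (the path and contents are purely for illustration):

```shell
# Build a small file of duplicate IPs, then version-sort, de-dupe, and count in one pass
printf '10.4.6.3\n10.4.0.0\n10.4.6.3\n10.8.9.2\n' > /tmp/servers_demo.txt
sort -Vu /tmp/servers_demo.txt | wc -l    # prints 3 (three unique addresses)
```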

Sorting (Pass)words by length

To sort a wordlist by line length, have awk prepend each line’s length, sort numerically, then strip the length field back off:

awk '{ print length, $0 }' password_uniq.txt | sort -n | cut -d" " -f2-

(The trailing dash in -f2- matters: plain -f2 would truncate any password that itself contains a space.)
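A quick demonstration with a made-up wordlist (using -f2- rather than -f2, so the rest of the line survives even when it contains spaces):

```shell
# Prepend each line's length, sort numerically, then strip the length field
printf 'hunter2\nabc\nmy pass word\n' | awk '{ print length, $0 }' | sort -n | cut -d" " -f2-
# abc
# hunter2
# my pass word
```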

Sorting by frequency of occurrence

Any time you have a list of strings and you want to sort them by how often each appears, pipe the list to this:

sort | uniq -c | sort -nr

First, sort puts them in order, then uniq -c removes duplicates and prepends an integer that is the number of times that string appears in the input, then sort -nr sorts numerically (-n), reversed (-r) so higher numbers come first.
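For instance, with a made-up list of strings:

```shell
# Most frequent string floats to the top, count first (uniq -c left-pads the counts)
printf 'cat\ndog\ncat\nbird\ncat\ndog\n' | sort | uniq -c | sort -nr
#   3 cat
#   2 dog
#   1 bird
```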

Looping Constructs

# Loop over file contents, one line at a time:
while read -r p; do
  echo "$p"
done < input.txt

# Similar, but word-splits on whitespace (and glob-expands),
# so prefer the while/read loop above for line-oriented files:
for u in $(cat inputs.txt); do
  echo "$u"
done

# Loop over filesystem objects (actually, "words" in the output
# of ls, so filenames containing spaces will split):
for i in $(ls); do
    echo "item: $i"
done

# Loop over literals
for i in 1 2 fish cats bananas_too
do
   echo "Welcome $i"
done

# Iterate over a numeric range with jot (BSD; on Linux, use seq 1940 2010)
jot - 1940 2010

# Bash 3.0 adds brace ranges
for i in {1..5}
do
   echo "Welcome $i times"
done

# Bash 4.0 adds steps: {start..end..step}
for i in {0..10..2}
do
   echo "Welcome $i times"
done
 

Iterate Over IPs

Want to run a command against a bunch of IP addresses but have separate output files for each? Easy peasy.

$ for i in $(cat servers.txt); do nmap -p22 --script=ssh2-enum-algos "$i" -oN "${i}_weakssh.nmap"; done

That one-liner will give you one Nmap output file per host in “servers.txt”. Note that tab completion mostly breaks in subshells ($(...)), so I usually run cat servers.txt or whatever first, then build my loop around that.

Get whois info for a file of IPs or networks

while read -r p; do
  echo "$p"
  whois "$p" | grep -Ei 'CIDR|NetName|email|route|inetnum'
done < networks.txt

Prepopulate URLs in Burp (REST testing?)

while read -r p; do
  curl --proxy http://127.0.0.1:8080/ --proto-default https -k "$p"
done < file_of_urls.txt

Unique by Column

Want to only output lines based on unique values from a certain column? I find myself needing to do this when I want a good screenshot of things like Nmap script output.

$ ag arcfour | head -n5
10.xxx.xxx.xx_weakssh.nmap:20:|       arcfour256  
10.xxx.xxx.xx_weakssh.nmap:21:|       arcfour128  
10.xxx.xxx.xx_weakssh.nmap:28:|       arcfour  
10.xxx.xx.xxx_weakssh.nmap:20:|       arcfour256  
10.xxx.xx.xxx_weakssh.nmap:21:|       arcfour128  
 
$ ag arcfour | sort -u -t':' -k3,3
10.xxx.xxx.xx_weakssh.nmap:28:|       arcfour  
10.xxx.xxx.xx_weakssh.nmap:21:|       arcfour128  
10.xxx.xxx.xx_weakssh.nmap:20:|       arcfour256

Explanation:

sort				# Sort command
     -u				# Unique results only, please
		-t':'		# Delimiter is a colon (":")
			  -k3,3	# Sort by and stop at key 3 (the SSH algo in this case)

As a bonus, do this if you want to count how many occurrences of each unique column there are:

ag arcfour | tr -s ' ' | awk -F ':' '{print $3}' | sort | uniq -c | sort -nr

Explanation:

awk							# Linux awk command
    -F ':'					# Field separator is a colon (":")
	       '{print $3}'		# Print the third field
		   
| sort						# Can only uniq on a sorted list. Sort first
       | uniq -c			# Uniq and give a count
	 		     | sort -nr	# Sort by count, descending order

Trimming the Fat

Repeated Spaces

Want to get rid of repeated spaces in command output? Easy peasy. The Linux tr command can be used to quickly remove whitespace. The -s option “squeezes” multiple occurrences of the character you feed it down to one. Compare that to the -d option, which deletes all occurrences.

$ echo "test        ugly            spacing"
test        ugly            spacing
$ echo "test        ugly            spacing" | tr -s ' '
test ugly spacing

You can also use tr to make pasting targets into Metasploit way easier by replacing newlines with spaces, like so:

$ cat servers.txt
10.1.1.1
10.2.2.2
10.3.3.3
 
$ cat servers.txt | tr '\n' ' '
10.1.1.1 10.2.2.2 10.3.3.3

Specific Fields

Want to get rid of certain fields in your terminal output? Use the Linux cut command! Take for example the testssl.sh output of POODLE.

$ ag POODLE | grep VULNERABLE | tr -s ' ' | head -n1
10.xxx.xxx.xx_p443-20211011-1644.log:155: POODLE, SSL (CVE-2014-3566) VULNERABLE (NOT ok), uses SSLv3+CBC (check TLS_FALLBACK_SCSV mitigation below)

Jeez, what a hot mess. The right border of a screenshot can trim some of the fat at the end of the line, but we might be better off cutting the CVE out of the middle. To do so, you can use cut with a fancy delimiter chain:

$ ag POODLE | grep VULNERABLE | tr -s ' ' | cut -d' ' -f -3,5- | head -n1
10.148.128.59_p443-20211011-1644.log:155: POODLE, SSL VULNERABLE (NOT ok), uses SSLv3+CBC (check TLS_FALLBACK_SCSV mitigation below)

Delimiter chaining is a convenient way to cut different parts of a line without having to pipe multiple separate cut commands. As a quick explanation, -f -3,5- means to print everything from the beginning of the line through field 3, then from field 5 through the end of the line.
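The same field chain on a toy line makes the behavior obvious:

```shell
# Keep fields 1-3 and 5 onward; field 4 ("four") is dropped
echo "one two three four five six" | cut -d' ' -f -3,5-
# one two three five six
```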

Using ‘fc’ to re-run commands from .bash_history

TIL: fc

The fc builtin lets you select a range of commands from your Bash history, even with negative numbers counting back from the most recent command, and dump them into a file that gets executed once you exit the editor (credit to Justin Angel for this one).

https://unix.stackexchange.com/questions/24739/how-to-execute-consecutive-commands-from-history/24740

Say you’ve been scrolling through history repeating the same 8 commands for days (long story). Instead:

fc -15 -1

That dumps the last 15 history entries to a file and opens it in your editor. Remove all the other junk, then save and close the file, and the remaining commands execute. It’s a quick way to make a Bash script out of your history. One wonders whether there’s a way to use it during command injection attacks to craft payloads.

SSH section moved to SSH

Directory Stack

If you want to change directories without losing your current directory, you can utilize the directory stack. Information about the following three commands can be found in the builtins manual page.

NOTE

Flags for the commands detailed below are Bash-specific. Given that these are shell builtins, check the manual pages for your specific shell; for instance, check zshbuiltins if you use Zsh. Directory stack commands without flags should behave comparably between shells, so it shouldn’t be an issue until you want to get fancy.

Listing and Clearing Stack

To view or clear all stack listings, use the dirs command.

dirs       # list directory stack
dirs -c    # clear entire stack
dirs -l    # list directory stack with full pathnames (i.e. expanded home directory)
dirs -p    # print directory stack, one entry per line
dirs -v    # print directory stack, one entry per line, with index number
dirs +5    # Display 5th stack element (from the left, starting with 0)
dirs -5    # Display 5th stack element (from the right, starting with 0)

Push Stack

Stack pushing operations are done with the pushd command.

pushd        # Swap top two directories on stack (error if stack is empty)
pushd -n     # Change directory stack without changing current directory
pushd +5     # Rotate the stack so the 5th element (from the left, starting with 0) is at the top
pushd -5     # Rotate the stack so the 5th element (from the right, starting with 0) is at the top
pushd dir    # Add dir to the top of the directory stack and make it the current working directory

Pop Stack

Stack popping operations are done with the popd command.

popd       # Remove top directory, change working directory to new stack top
popd -n    # Remove directory without changing current directory
popd +5    # Remove 5th element (from the left, starting with 0)
popd -5    # Remove 5th element (from the right, starting with 0)
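Putting the three commands together, a typical session might look like this (using /tmp, /var, and /etc purely as example directories):

```shell
cd /tmp
pushd /var    # stack: /var /tmp -- cwd is now /var
pushd /etc    # stack: /etc /var /tmp -- cwd is now /etc
dirs          # show the stack
popd          # drop /etc; cwd returns to /var
popd          # drop /var; cwd returns to /tmp
```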