Working With Files and Directories
Overview
Teaching: 10 min
Exercises: 5 min
Questions
How can I create, copy, and delete files and directories?
How can I edit files?
Objectives
Create a directory hierarchy that matches a given structure.
Create files in that hierarchy using an editor or by copying and renaming existing files.
Delete, copy and move specified files and/or directories.
Creating directories
We now know how to explore files and directories, but how do we create them in the first place?
Step one: see where we are and what we already have
Let’s go back to our data-shell
directory
and use ls -F
to see what it contains:
$ pwd
/homes/kpegion/data-shell
$ ls -F
creatures/ data/ molecules/ north-pacific-gyre/ notes.txt pizza.cfg solar.pdf writing/
Create a directory
Let’s create a new directory called thesis
using the command mkdir thesis
(which has no output):
$ mkdir thesis
As you might guess from its name,
mkdir
means ‘make directory’.
Since thesis
is a relative path
(i.e., does not have a leading slash, like /what/ever/thesis
),
the new directory is created in the current working directory:
$ ls -F
creatures/ data/ molecules/ north-pacific-gyre/ notes.txt pizza.cfg solar.pdf thesis/ writing/
Good names for files and directories
Complicated names of files and directories can make your life painful when working on the command line. Here we provide a few useful tips for the names of your files.
1. Don't use spaces. Spaces can make a name more meaningful, but since spaces are used to separate arguments on the command line it is better to avoid them in names of files and directories. You can use - or _ instead (e.g. north-pacific-gyre/ rather than north pacific gyre/).
2. Don't begin the name with - (dash). Commands treat names starting with - as options.
3. Stick with letters, numbers, . (period or 'full stop'), - (dash) and _ (underscore). Many other characters have special meanings on the command line. We will learn about some of these during this lesson. There are special characters that can cause your command to not work as expected and can even result in data loss.
If you need to refer to names of files or directories that have spaces or other special characters, you should surround the name in quotes ("").
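For example (a sketch, using a hypothetical directory whose name contains spaces), quotes keep the shell from splitting the name into separate arguments:
$ ls -F "north pacific gyre"
Without the quotes, the shell would treat north, pacific, and gyre as three separate arguments.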
Since we’ve just created the thesis
directory, there’s nothing in it yet:
$ ls -F thesis
Create a text file
Let’s change our working directory to thesis
using cd
,
then run a text editor called Nano to create a file called draft.txt
:
$ cd thesis
$ nano draft.txt
Which Editor?
When we say, 'nano is a text editor' we really do mean 'text': it can only work with plain character data, not tables, images, or any other human-friendly media. We use it in examples because it is one of the least complex text editors. However, because of this trait, it may not be powerful enough or flexible enough for the work you need to do after this workshop. On Unix systems (such as Linux and macOS), many programmers use Emacs or Vim (both of which require more time to learn), or a graphical editor such as Gedit.
No matter what editor you use on COLA, it will search within and save files to your current working directory as its default location.
Let’s type in a few lines of text.
Once we’re happy with our text, in nano
we can press Ctrl+O
(press the Ctrl or Control key and, while
holding it down, press the O key) to write our data to disk
(we’ll be asked what file we want to save this to:
press Return to accept the suggested default of draft.txt
).

Once our file is saved, we can use Ctrl+X to quit the editor and return to the shell.
The Control, Ctrl, or ^ Key
The Control key is often labeled the Ctrl key on the keyboard. There are various ways in which using the Control key may be described. For example, you may see an instruction to press the Ctrl key and, while holding it down, press the X key, described as any of:
Control-X
Control+X
Ctrl-X
Ctrl+X
^X
C-x
In nano, along the bottom of the screen you'll see ^G Get Help ^O WriteOut. This means that you can use Control-G to get help and Control-O to save your file.
nano
doesn’t leave any output on the screen after it exits,
but ls
now shows that we have created a file called draft.txt
:
$ ls
draft.txt
Creating Files a Different Way
We have seen how to create text files using the nano editor. Now, try the following command:
$ touch my_file.txt
1. What did the touch command do (Hint: use ls)?
2. How large is my_file.txt?
Solution
1. The touch command generates a new file called my_file.txt in your current directory.
2. When you inspect the file with ls -l, note that the size of my_file.txt is 0 bytes. In other words, it contains no data. If you open my_file.txt using your text editor it is blank.
What's In A Name?
You may have noticed that all of the files we are working with are named 'something dot something', and in this part of the lesson, we always used the extension .txt. This is just a convention: we can call a file mythesis or almost anything else we want. However, most people use two-part names most of the time to help them (and their programs) tell different kinds of files apart. The second part of such a name is called the filename extension, and indicates what type of data the file holds: .txt signals a plain text file, .cfg is a configuration file full of parameters for some program or other, .png is a PNG image, and so on.
This is just a convention, albeit an important one. Files contain bytes: it's up to us and our programs to interpret those bytes according to the rules for plain text files, PDF documents, configuration files, images, and so on.
Naming a PNG image of a whale as whale.mp3 doesn't somehow magically turn it into a recording of whalesong, though it might cause the operating system to try to open it with a music player when someone double-clicks it.
Moving files and directories
Returning to the data-shell
directory:
$ cd ~/data-shell/
In our thesis
directory we have a file draft.txt
which isn’t a particularly informative name,
so let’s change the file’s name using mv
,
which is short for ‘move’:
$ mv thesis/draft.txt thesis/quotes.txt
The first argument tells mv
what we’re ‘moving’,
while the second is where it’s to go.
In this case,
we’re moving thesis/draft.txt
to thesis/quotes.txt
,
which has the same effect as renaming the file.
Sure enough,
ls
shows us that thesis
now contains one file called quotes.txt
:
$ ls thesis
quotes.txt
One has to be careful when specifying the target file name, since mv
will
silently overwrite any existing file with the same name, which could
lead to data loss. An additional option, mv -i
(or mv --interactive
),
can be used to make mv
ask you for confirmation before overwriting.
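As a sketch of the interactive option (assuming a file called quotes.txt already existed in the current directory; the exact prompt wording varies between systems):
$ mv -i thesis/quotes.txt .
mv: overwrite './quotes.txt'? n
Answering n cancels the move and leaves both files untouched.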
Note that mv
also works on directories.
Let’s move quotes.txt
into the current working directory.
We use mv
once again,
but this time we’ll use just the name of a directory as the second argument
to tell mv
that we want to keep the filename,
but put the file somewhere new.
(This is why the command is called ‘move’.)
In this case,
the directory name we use is the special directory name .
that we mentioned earlier.
$ mv thesis/quotes.txt .
The effect is to move the file from the directory it was in to the current working directory.
ls
now shows us that thesis
is empty:
$ ls thesis
Further,
ls
with a filename or directory name as an argument only lists that file or directory.
We can use this to see that quotes.txt
is still in our current directory:
$ ls quotes.txt
quotes.txt
Moving Files to a new folder
After running the following commands, Jamie realizes that she put the files sucrose.dat and maltose.dat into the wrong folder. The files should have been placed in the raw folder.
$ ls -F
analyzed/ raw/
$ ls -F analyzed
fructose.dat glucose.dat maltose.dat sucrose.dat
$ cd analyzed
Fill in the blanks to move these files to the raw/ folder (i.e. the one she forgot to put them in):
$ mv sucrose.dat maltose.dat ____/____
Solution
$ mv sucrose.dat maltose.dat ../raw
Recall that .. refers to the parent directory (i.e. one above the current directory) and that . refers to the current directory.
Copying files and directories
The cp
command works very much like mv
,
except it makes a duplicate copy of a file instead of moving it.
We can check that it did the right thing using ls
with two paths as arguments — like most Unix commands,
ls
can be given multiple paths at once:
$ cp quotes.txt thesis/quotations.txt
$ ls quotes.txt thesis/quotations.txt
quotes.txt thesis/quotations.txt
We can also copy a directory and all its contents by using the
recursive option -r
,
e.g. to back up a directory:
$ cp -r thesis thesis_backup
We can check the result by listing the contents of both the thesis
and thesis_backup
directory:
$ ls thesis thesis_backup
thesis:
quotations.txt
thesis_backup:
quotations.txt
Renaming Files
Suppose that you created a plain-text file in your current directory to contain a list of the statistical tests you will need to do to analyze your data, and named it:
statstics.txt
After creating and saving this file you realize you misspelled the filename! You want to correct the mistake, which of the following commands could you use to do so?
1. cp statstics.txt statistics.txt
2. mv statstics.txt statistics.txt
3. mv statstics.txt .
4. cp statstics.txt .
Solution
1. No. While this would create a file with the correct name, the incorrectly named file still exists in the directory and would need to be deleted.
2. Yes, this would work to rename the file.
3. No, the period (.) indicates where to move the file, but does not provide a new file name; identical file names cannot be created.
4. No, the period (.) indicates where to copy the file, but does not provide a new file name; identical file names cannot be created.
Moving and Copying
What is the output of the closing ls command in the sequence shown below?
$ pwd
/Users/jamie/data
$ ls
proteins.dat
$ mkdir recombine
$ mv proteins.dat recombine/
$ cp recombine/proteins.dat ../proteins-saved.dat
$ ls
1. proteins-saved.dat recombine
2. recombine
3. proteins.dat recombine
4. proteins-saved.dat
Solution
We start in the /Users/jamie/data directory, and create a new folder called recombine. The second line moves (mv) the file proteins.dat to the new folder (recombine). The third line makes a copy of the file we just moved. The tricky part here is where the file was copied to. Recall that .. means 'go up a level', so the copied file is now in /Users/jamie. Notice that .. is interpreted with respect to the current working directory, not with respect to the location of the file being copied. So, the only thing that will show using ls (in /Users/jamie/data) is the recombine folder.
1. No, see explanation above. proteins-saved.dat is located at /Users/jamie
2. Yes
3. No, see explanation above. proteins.dat is located at /Users/jamie/data/recombine
4. No, see explanation above. proteins-saved.dat is located at /Users/jamie
Removing files and directories
Returning to the data-shell
directory,
let’s tidy up this directory by removing the quotes.txt
file we created.
The Unix command we’ll use for this is rm
(short for ‘remove’):
$ rm quotes.txt
We can confirm the file has gone using ls
:
$ ls quotes.txt
ls: cannot access 'quotes.txt': No such file or directory
Deleting Is Forever
The Unix shell doesn’t have a trash bin that we can recover deleted files from (though most graphical interfaces to Unix do). Instead, when we delete files, they are unlinked from the file system so that their storage space on disk can be recycled. Tools for finding and recovering deleted files do exist, but there’s no guarantee they’ll work in any particular situation, since the computer may recycle the file’s disk space right away.
Using rm Safely
What happens when we execute rm -i thesis_backup/quotations.txt? Why would we want this protection when using rm?
Solution
$ rm -i thesis_backup/quotations.txt
rm: remove regular file 'thesis_backup/quotations.txt'? y
The -i option will prompt before (every) removal (use Y to confirm deletion or N to keep the file). By using the -i option, we have the chance to check that we are deleting only the files that we want to remove. Some organizations' Unix systems have this option set by default - the COLA system does not! So be careful with the rm command.
If we try to remove the thesis
directory using rm thesis
,
we get an error message:
$ rm thesis
rm: cannot remove `thesis': Is a directory
This happens because rm
by default only works on files, not directories.
rm
can remove a directory and all its contents if we use the
recursive option -r
, and it will do so without any confirmation prompts:
$ rm -r thesis
Given that there is no way to retrieve files deleted using the shell,
rm -r
should be used with great caution (you might consider adding the interactive option rm -r -i
).
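Had we used the interactive option instead, rm would have asked about each item before deleting it. A sketch of what that might look like (the exact prompt wording varies between systems):
$ rm -r -i thesis
rm: descend into directory 'thesis'? y
rm: remove regular file 'thesis/quotations.txt'? y
rm: remove directory 'thesis'? y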
To remove an empty directory, you can use the rmdir
command:
$ rmdir thesis
Operations with multiple files and directories
Oftentimes one needs to copy or move several files at once. This can be done by providing a list of individual filenames, or specifying a naming pattern using wildcards.
Copy with Multiple Filenames
For this exercise, you can test the commands in the data-shell/data directory.
In the example below, what does cp do when given several filenames and a directory name?
$ mkdir backup
$ cp amino-acids.txt animals.txt backup/
In the example below, what does cp do when given three or more file names?
$ ls -F
amino-acids.txt animals.txt backup/ elements/ morse.txt pdb/ planets.txt salmon.txt sunspot.txt
$ cp amino-acids.txt animals.txt morse.txt
Solution
If given more than one file name followed by a directory name (i.e. the destination directory must be the last argument), cp copies the files to the named directory.
If given three file names, cp throws an error such as the one below, because it is expecting a directory name as the last argument.
cp: target 'morse.txt' is not a directory
Using wildcards for accessing multiple files at once
Wildcards
* is a wildcard, which matches zero or more characters. Let's consider the data-shell/molecules directory: *.pdb matches ethane.pdb, propane.pdb, and every file that ends with '.pdb'. On the other hand, p*.pdb only matches pentane.pdb and propane.pdb, because the 'p' at the front only matches filenames that begin with the letter 'p'.
? is also a wildcard, but it matches exactly one character. So ?ethane.pdb would match methane.pdb whereas *ethane.pdb matches both ethane.pdb and methane.pdb.
Wildcards can be used in combination with each other, e.g. ???ane.pdb matches three characters followed by ane.pdb, giving cubane.pdb ethane.pdb octane.pdb.
When the shell sees a wildcard, it expands the wildcard to create a list of matching filenames before running the command that was asked for. As an exception, if a wildcard expression does not match any file, Bash will pass the expression as an argument to the command as it is. For example, typing ls *.pdf in the molecules directory (which contains only files with names ending with .pdb) results in an error message that there is no file called *.pdf. However, generally commands like wc and ls see the lists of file names matching these expressions, but not the wildcards themselves. It is the shell, not the other programs, that deals with expanding wildcards, and this is another example of orthogonal design.
There are many other, fancier wildcards as well. For example:
- [0-9] will match only numbers
- [a-Z] will match any letters of either case. Unix alphabetizes as aAbBcCdD..zZ, so the mixed cases a and Z are necessary to include all the letters in the range.
- [[:lower:]] will match only lower-case letters
- [[:upper:]] will match only upper-case letters
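For example, run in the data-shell/molecules directory, a character range combines with * to select only the files whose names begin with 'c' or 'p':
$ ls [cp]*.pdb
cubane.pdb  pentane.pdb  propane.pdb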
List filenames matching a pattern
When run in the molecules directory, which ls command(s) will produce this output?
ethane.pdb methane.pdb
1. ls *t*ane.pdb
2. ls *t?ne.*
3. ls *t??ne.pdb
4. ls ethane.*
Solution
The solution is 3.
1. shows all files whose names contain zero or more characters (*) followed by the letter t, then zero or more characters (*) followed by ane.pdb. This gives ethane.pdb methane.pdb octane.pdb pentane.pdb.
2. shows all files whose names start with zero or more characters (*) followed by the letter t, then a single character (?), then ne. followed by zero or more characters (*). This will give us octane.pdb and pentane.pdb but doesn't match anything which ends in thane.pdb.
3. fixes the problems of option 2 by matching two characters (??) between t and ne. This is the solution.
4. only shows files starting with ethane.
More on Wildcards
Sam has a directory containing calibration data, datasets, and descriptions of the datasets:
.
├── 2015-10-23-calibration.txt
├── 2015-10-23-dataset1.txt
├── 2015-10-23-dataset2.txt
├── 2015-10-23-dataset_overview.txt
├── 2015-10-26-calibration.txt
├── 2015-10-26-dataset1.txt
├── 2015-10-26-dataset2.txt
├── 2015-10-26-dataset_overview.txt
├── 2015-11-23-calibration.txt
├── 2015-11-23-dataset1.txt
├── 2015-11-23-dataset2.txt
├── 2015-11-23-dataset_overview.txt
├── backup
│   ├── calibration
│   └── datasets
└── send_to_bob
    ├── all_datasets_created_on_a_23rd
    └── all_november_files
Before heading off to another field trip, she wants to back up her data and send some datasets to her colleague Bob. Sam uses the following commands to get the job done:
$ cp *dataset* backup/datasets
$ cp ____calibration____ backup/calibration
$ cp 2015-____-____ send_to_bob/all_november_files/
$ cp ____ send_to_bob/all_datasets_created_on_a_23rd/
Help Sam by filling in the blanks.
The resulting directory structure should look like this
.
├── 2015-10-23-calibration.txt
├── 2015-10-23-dataset1.txt
├── 2015-10-23-dataset2.txt
├── 2015-10-23-dataset_overview.txt
├── 2015-10-26-calibration.txt
├── 2015-10-26-dataset1.txt
├── 2015-10-26-dataset2.txt
├── 2015-10-26-dataset_overview.txt
├── 2015-11-23-calibration.txt
├── 2015-11-23-dataset1.txt
├── 2015-11-23-dataset2.txt
├── 2015-11-23-dataset_overview.txt
├── backup
│   ├── calibration
│   │   ├── 2015-10-23-calibration.txt
│   │   ├── 2015-10-26-calibration.txt
│   │   └── 2015-11-23-calibration.txt
│   └── datasets
│       ├── 2015-10-23-dataset1.txt
│       ├── 2015-10-23-dataset2.txt
│       ├── 2015-10-23-dataset_overview.txt
│       ├── 2015-10-26-dataset1.txt
│       ├── 2015-10-26-dataset2.txt
│       ├── 2015-10-26-dataset_overview.txt
│       ├── 2015-11-23-dataset1.txt
│       ├── 2015-11-23-dataset2.txt
│       └── 2015-11-23-dataset_overview.txt
└── send_to_bob
    ├── all_datasets_created_on_a_23rd
    │   ├── 2015-10-23-dataset1.txt
    │   ├── 2015-10-23-dataset2.txt
    │   ├── 2015-10-23-dataset_overview.txt
    │   ├── 2015-11-23-dataset1.txt
    │   ├── 2015-11-23-dataset2.txt
    │   └── 2015-11-23-dataset_overview.txt
    └── all_november_files
        ├── 2015-11-23-calibration.txt
        ├── 2015-11-23-dataset1.txt
        ├── 2015-11-23-dataset2.txt
        └── 2015-11-23-dataset_overview.txt
Solution
$ cp *calibration.txt backup/calibration
$ cp 2015-11-* send_to_bob/all_november_files/
$ cp *-23-dataset* send_to_bob/all_datasets_created_on_a_23rd/
Organizing Directories and Files
Jamie is working on a project and she sees that her files aren’t very well organized:
$ ls -F
analyzed/ fructose.dat raw/ sucrose.dat
The fructose.dat and sucrose.dat files contain output from her data analysis. What command(s) covered in this lesson does she need to run so that the commands below will produce the output shown?
$ ls -F
analyzed/ raw/
$ ls analyzed
fructose.dat sucrose.dat
Solution
mv *.dat analyzed
Jamie needs to move her files fructose.dat and sucrose.dat to the analyzed directory. The shell will expand *.dat to match all .dat files in the current directory. The mv command then moves the list of .dat files to the 'analyzed' directory.
Reproduce a folder structure
You’re starting a new experiment, and would like to duplicate the directory structure from your previous experiment so you can add new data.
Assume that the previous experiment is in a folder called '2016-05-18', which contains a data folder that in turn contains folders named raw and processed that contain data files. The goal is to copy the folder structure of the 2016-05-18 folder into a folder called 2016-05-20 so that your final directory structure looks like this:
2016-05-20/
└── data
    ├── processed
    └── raw
Which of the following set of commands would achieve this objective? What would the other commands do?
1.
$ mkdir 2016-05-20
$ mkdir 2016-05-20/data
$ mkdir 2016-05-20/data/processed
$ mkdir 2016-05-20/data/raw
2.
$ mkdir 2016-05-20
$ cd 2016-05-20
$ mkdir data
$ cd data
$ mkdir raw processed
3.
$ mkdir 2016-05-20/data/raw
$ mkdir 2016-05-20/data/processed
4.
$ mkdir 2016-05-20
$ cd 2016-05-20
$ mkdir data
$ mkdir raw processed
Solution
The first two sets of commands achieve this objective. The first set uses relative paths to create the top level directory before the subdirectories.
The third set of commands will give an error because mkdir won't create a subdirectory of a non-existent directory: the intermediate-level folders must be created first.
The final set of commands generates the 'raw' and 'processed' directories at the same level as the 'data' directory.
Key Points
cp old new copies a file.
mkdir path creates a new directory.
mv old new moves (renames) a file or directory.
rm path removes (deletes) a file.
* matches zero or more characters in a filename, so *.txt matches all files ending in .txt.
? matches any single character in a filename, so ?.txt matches a.txt but not any.txt.
Use of the Control key may be described in many ways, including Ctrl-X, Control-X, and ^X.
The shell does not have a trash bin: once something is deleted, it's really gone.
Most files' names are something.extension. The extension isn't required, and doesn't guarantee anything, but is normally used to indicate the type of data in the file.
Depending on the type of work you do, you may need a more powerful text editor than Nano.
Redirects, Pipes and Filters
Overview
Teaching: 15 min
Exercises: 5 min
Questions
How can I combine existing commands to do new things?
Objectives
Redirect a command’s output to a file.
Process a file instead of keyboard input using redirection.
Construct command pipelines with two or more stages.
Explain what usually happens if a program or pipeline isn’t given any input to process.
Explain Unix’s ‘small pieces, loosely joined’ philosophy.
Now that we know a few basic commands,
we can finally look at the shell’s most powerful feature:
the ease with which it lets us combine existing programs in new ways.
We’ll start with the directory called data-shell/molecules
that contains six files describing some simple organic molecules.
The .pdb
extension indicates that these files are in Protein Data Bank format,
a simple text format that specifies the type and position of each atom in the molecule.
$ ls molecules
cubane.pdb ethane.pdb methane.pdb
octane.pdb pentane.pdb propane.pdb
Let’s go into that directory with cd
and run an example command wc cubane.pdb
:
$ cd molecules
$ wc cubane.pdb
20 156 1158 cubane.pdb
wc
is the ‘word count’ command:
it counts the number of lines, words, and characters in files (from left to right, in that order).
If we run the command wc *.pdb
, the *
in *.pdb
matches zero or more characters,
so the shell turns *.pdb
into a list of all .pdb
files in the current directory:
$ wc *.pdb
20 156 1158 cubane.pdb
12 84 622 ethane.pdb
9 57 422 methane.pdb
30 246 1828 octane.pdb
21 165 1226 pentane.pdb
15 111 825 propane.pdb
107 819 6081 total
Note that wc *.pdb
also shows the total number of all lines in the last line of the output.
If we run wc -l
instead of just wc
,
the output shows only the number of lines per file:
$ wc -l *.pdb
20 cubane.pdb
12 ethane.pdb
9 methane.pdb
30 octane.pdb
21 pentane.pdb
15 propane.pdb
107 total
The -m
and -w
options can also be used with the wc
command, to show
only the number of characters or the number of words in the files.
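For example, wc -w on the same files shows just the word counts (the middle column of the full wc output above):
$ wc -w *.pdb
  156 cubane.pdb
   84 ethane.pdb
   57 methane.pdb
  246 octane.pdb
  165 pentane.pdb
  111 propane.pdb
  819 total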
Why Isn’t It Doing Anything?
What happens if a command is supposed to process a file, but we don’t give it a filename? For example, what if we type:
$ wc -l
but don't type *.pdb (or anything else) after the command? Since it doesn't have any filenames, wc assumes it is supposed to process input given at the command prompt, so it just sits there and waits for us to give it some data interactively. From the outside, though, all we see is it sitting there: the command doesn't appear to do anything.
If you make this kind of mistake, you can escape out of this state by holding down the control key (Ctrl) and typing the letter C once and letting go of the Ctrl key: Ctrl+C.
Capturing output from commands
Which of these files contains the fewest lines? It’s an easy question to answer when there are only six files, but what if there were 6000? Our first step toward a solution is to run the command:
$ wc -l *.pdb > lengths.txt
The greater than symbol, >
, tells the shell to redirect the command’s output
to a file instead of printing it to the screen. (This is why there is no screen output:
everything that wc
would have printed has gone into the
file lengths.txt
instead.) The shell will create
the file if it doesn’t exist. If the file exists, it will be
silently overwritten, which may lead to data loss and thus requires
some caution.
ls lengths.txt
confirms that the file exists:
$ ls lengths.txt
lengths.txt
We can now send the content of lengths.txt
to the screen using cat lengths.txt
.
The cat
command gets its name from ‘concatenate’ i.e. join together,
and it prints the contents of files one after another.
There’s only one file in this case,
so cat
just shows us what it contains:
$ cat lengths.txt
20 cubane.pdb
12 ethane.pdb
9 methane.pdb
30 octane.pdb
21 pentane.pdb
15 propane.pdb
107 total
Output Page by Page
We'll continue to use cat in this lesson, for convenience and consistency, but it has the disadvantage that it always dumps the whole file onto your screen. More useful in practice is the command less, which you use with less lengths.txt. This displays a screenful of the file, and then stops. You can go forward one screenful by pressing the spacebar, or back one by pressing b. Press q to quit.
Filtering output
Next we’ll use the sort
command to sort the contents of the lengths.txt
file.
But first we’ll use an exercise to learn a little about the sort command:
What Does sort -n Do?
If we run sort on a file containing the following lines:
10
2
19
22
6
the output is:
10
19
2
22
6
If we run sort -n on the same input, we get this instead:
2
6
10
19
22
Explain why -n has this effect.
Solution
The -n option specifies a numerical rather than an alphanumerical sort.
We will also use the -n
option to specify that the sort is
numerical instead of alphanumerical.
This does not change the file;
instead, it sends the sorted result to the screen:
$ sort -n lengths.txt
9 methane.pdb
12 ethane.pdb
15 propane.pdb
20 cubane.pdb
21 pentane.pdb
30 octane.pdb
107 total
We can put the sorted list of lines in another temporary file called sorted-lengths.txt
by putting > sorted-lengths.txt
after the command,
just as we used > lengths.txt
to put the output of wc
into lengths.txt
.
Once we’ve done that,
we can run another command called head
to get the first few lines in sorted-lengths.txt
:
$ sort -n lengths.txt > sorted-lengths.txt
$ head -n 1 sorted-lengths.txt
9 methane.pdb
Using -n 1
with head
tells it that
we only want the first line of the file;
-n 20
would get the first 20,
and so on.
Since sorted-lengths.txt
contains the lengths of our files ordered from least to greatest,
the output of head
must be the file with the fewest lines.
Redirecting to the same file
It’s a very bad idea to try redirecting the output of a command that operates on a file to the same file. For example:
$ sort -n lengths.txt > lengths.txt
Doing something like this may give you incorrect results and/or delete the contents of
lengths.txt
.
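If you do need the sorted result under the original name, a safer pattern (a sketch; the temporary name lengths.tmp is our own choice) is to write to a temporary file first and rename it only once the command has succeeded:
$ sort -n lengths.txt > lengths.tmp && mv lengths.tmp lengths.txt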
What Does >> Mean?
We have seen the use of >, but there is a similar operator >> which works slightly differently. We'll learn about the differences between these two operators by printing some strings. We can use the echo command to print strings e.g.
$ echo The echo command prints text
The echo command prints text
Now test the commands below to reveal the difference between the two operators:
$ echo hello > testfile01.txt
and:
$ echo hello >> testfile02.txt
Hint: Try executing each command twice in a row and then examining the output files.
Solution
In the first example with >, the string 'hello' is written to testfile01.txt, but the file gets overwritten each time we run the command.
We see from the second example that the >> operator also writes 'hello' to a file (in this case testfile02.txt), but appends the string to the file if it already exists (i.e. when we run it for the second time).
Appending Data
We have already met the head command, which prints lines from the start of a file. tail is similar, but prints lines from the end of a file instead.
Consider the file data-shell/data/animals.txt. After these commands, select the answer that corresponds to the file animals-subset.txt:
$ head -n 3 animals.txt > animals-subset.txt
$ tail -n 2 animals.txt >> animals-subset.txt
1. The first three lines of animals.txt
2. The last two lines of animals.txt
3. The first three lines and the last two lines of animals.txt
4. The second and third lines of animals.txt
Solution
Option 3 is correct. For option 1 to be correct we would only run the head command. For option 2 to be correct we would only run the tail command. For option 4 to be correct we would have to pipe the output of head into tail -n 2 by doing head -n 3 animals.txt | tail -n 2 > animals-subset.txt
Passing output to another command
In our example of finding the file with the fewest lines,
we are using two intermediate files lengths.txt
and sorted-lengths.txt
to store output.
This is a confusing way to work because
even once you understand what wc
, sort
, and head
do,
those intermediate files make it hard to follow what’s going on.
We can make it easier to understand by running sort
and head
together:
$ sort -n lengths.txt | head -n 1
9 methane.pdb
The vertical bar, |
, between the two commands is called a pipe.
It tells the shell that we want to use
the output of the command on the left
as the input to the command on the right.
This has removed the need for the sorted-lengths.txt
file.
Combining multiple commands
Nothing prevents us from chaining pipes consecutively.
We can for example send the output of wc
directly to sort
,
and then the resulting output to head
.
This removes the need for any intermediate files.
We’ll start by using a pipe to send the output of wc
to sort
:
$ wc -l *.pdb | sort -n
9 methane.pdb
12 ethane.pdb
15 propane.pdb
20 cubane.pdb
21 pentane.pdb
30 octane.pdb
107 total
We can then send that output through another pipe, to head
, so that the full pipeline becomes:
$ wc -l *.pdb | sort -n | head -n 1
9 methane.pdb
This is exactly like a mathematician nesting functions like log(3x)
and saying ‘the log of three times x’.
In our case,
the calculation is ‘head of sort of line count of *.pdb
’.
Piping Commands Together
In our current directory, we want to find the 3 files which have the least number of lines. Which command listed below would work?
1. wc -l * > sort -n > head -n 3
2. wc -l * | sort -n | head -n 1-3
3. wc -l * | head -n 3 | sort -n
4. wc -l * | sort -n | head -n 3
Solution
Option 4 is the solution. The pipe character | is used to connect the output from one command to the input of another. > is used to redirect standard output to a file. Try it in the data-shell/molecules directory!
Tools designed to work together
This idea of linking programs together is why Unix has been so successful.
Instead of creating enormous programs that try to do many different things,
Unix programmers focus on creating lots of simple tools that each do one job well,
and that work well with each other.
This programming model is called ‘pipes and filters’.
We’ve already seen pipes;
a filter is a program like wc
or sort
that transforms a stream of input into a stream of output.
Almost all of the standard Unix tools can work this way:
unless told to do otherwise,
they read from standard input,
do something with what they’ve read,
and write to standard output.
The key is that any program that reads lines of text from standard input and writes lines of text to standard output can be combined with every other program that behaves this way as well. You can and should write your programs this way so that you and other people can put those programs into pipes to multiply their power.
In the interest of time we have to end this episode here, but if you are eager to work through some example exercises, follow the rest of this lesson here.
Key Points
wc counts lines, words, and characters in its inputs.
cat displays the contents of its inputs.
sort sorts its inputs.
head displays the first 10 lines of its input.
tail displays the last 10 lines of its input.
command > [file] redirects a command's output to a file (overwriting any existing content).
command >> [file] appends a command's output to a file.
[first] | [second] is a pipeline: the output of the first command is used as the input to the second.
The best way to use the shell is to use pipes to combine simple single-purpose programs (filters).
Shell Scripts
Overview
Teaching: 15 min
Exercises: 5 min
Questions
How can I save and re-use commands?
Objectives
Write a shell script that runs a command or series of commands for a fixed set of files.
Run a shell script from the command line.
Write a shell script that operates on a set of files defined by the user on the command line.
Create pipelines that include shell scripts you, and others, have written.
We are finally ready to see what makes the shell such a powerful programming environment. We are going to take the commands we repeat frequently and save them in files so that we can re-run all those operations again later by typing a single command. For historical reasons, a bunch of commands saved in a file is usually called a shell script, but make no mistake: these are actually small programs.
Not only will writing shell scripts make your work faster (you won't have to retype the same commands over and over again), it will also make it more accurate (fewer chances for typos) and more reproducible. If you come back to your work later (or if someone else finds your work and wants to build on it) you will be able to reproduce the same results simply by running your script, rather than having to remember or retype a long list of commands.
Let’s start by going back to molecules/
and creating a new file, middle.sh
which will
become our shell script:
$ cd molecules
$ nano middle.sh
The command nano middle.sh
opens the file middle.sh
within the text editor ‘nano’
(which runs within the shell).
If the file does not exist, it will be created.
We can use the text editor to directly edit the file – we’ll simply insert the following line:
head -n 15 octane.pdb | tail -n 5
This is a variation on the pipe we constructed earlier:
it selects lines 11-15 of the file octane.pdb
.
Remember, we are not running it as a command just yet:
we are putting the commands in a file.
Then we save the file (Ctrl-O
in nano),
and exit the text editor (Ctrl-X
in nano).
Check that the directory molecules
now contains a file called middle.sh
.
Once we have saved the file,
we can ask the shell to execute the commands it contains.
Our shell is called bash
, so we run the following command:
$ bash middle.sh
ATOM 9 H 1 -4.502 0.681 0.785 1.00 0.00
ATOM 10 H 1 -5.254 -0.243 -0.537 1.00 0.00
ATOM 11 H 1 -4.357 1.252 -0.895 1.00 0.00
ATOM 12 H 1 -3.009 -0.741 -1.467 1.00 0.00
ATOM 13 H 1 -3.172 -1.337 0.206 1.00 0.00
Sure enough, our script’s output is exactly what we would get if we ran that pipeline directly.
Text vs. Whatever
We usually call programs like Microsoft Word or LibreOffice Writer "text editors", but we need to be a bit more careful when it comes to programming. By default, Microsoft Word uses .docx files to store not only text, but also formatting information about fonts, headings, and so on. This extra information isn't stored as characters and doesn't mean anything to tools like head: they expect input files to contain nothing but the letters, digits, and punctuation on a standard computer keyboard. When editing programs, therefore, you must either use a plain text editor, or be careful to save files as plain text.
What if we want to select lines from an arbitrary file?
We could edit middle.sh
each time to change the filename,
but that would probably take longer than typing the command out again
in the shell and executing it with a new file name.
Instead, let’s edit middle.sh
and make it more versatile:
$ nano middle.sh
Now, within “nano”, replace the text octane.pdb
with the special variable called $1
:
head -n 15 "$1" | tail -n 5
Inside a shell script,
$1
means ‘the first filename (or other argument) on the command line’.
We can now run our script like this:
$ bash middle.sh octane.pdb
ATOM 9 H 1 -4.502 0.681 0.785 1.00 0.00
ATOM 10 H 1 -5.254 -0.243 -0.537 1.00 0.00
ATOM 11 H 1 -4.357 1.252 -0.895 1.00 0.00
ATOM 12 H 1 -3.009 -0.741 -1.467 1.00 0.00
ATOM 13 H 1 -3.172 -1.337 0.206 1.00 0.00
or on a different file like this:
$ bash middle.sh pentane.pdb
ATOM 9 H 1 1.324 0.350 -1.332 1.00 0.00
ATOM 10 H 1 1.271 1.378 0.122 1.00 0.00
ATOM 11 H 1 -0.074 -0.384 1.288 1.00 0.00
ATOM 12 H 1 -0.048 -1.362 -0.205 1.00 0.00
ATOM 13 H 1 -1.183 0.500 -1.412 1.00 0.00
Double-Quotes Around Arguments
For the same reason that we put the loop variable inside double-quotes, in case the filename happens to contain any spaces, we surround
$1
with double-quotes.
Currently, we need to edit middle.sh
each time we want to adjust the range of
lines that is returned.
Let’s fix that by configuring our script to instead use three command-line arguments.
After the first command-line argument ($1
), each additional argument that we
provide will be accessible via the special variables $1
, $2
, $3
,
which refer to the first, second, third command-line arguments, respectively.
Knowing this, we can use additional arguments to define the range of lines to
be passed to head
and tail
respectively:
$ nano middle.sh
head -n "$2" "$1" | tail -n "$3"
We can now run:
$ bash middle.sh pentane.pdb 15 5
ATOM 9 H 1 1.324 0.350 -1.332 1.00 0.00
ATOM 10 H 1 1.271 1.378 0.122 1.00 0.00
ATOM 11 H 1 -0.074 -0.384 1.288 1.00 0.00
ATOM 12 H 1 -0.048 -1.362 -0.205 1.00 0.00
ATOM 13 H 1 -1.183 0.500 -1.412 1.00 0.00
By changing the arguments to our command we can change our script’s behaviour:
$ bash middle.sh pentane.pdb 20 5
ATOM 14 H 1 -1.259 1.420 0.112 1.00 0.00
ATOM 15 H 1 -2.608 -0.407 1.130 1.00 0.00
ATOM 16 H 1 -2.540 -1.303 -0.404 1.00 0.00
ATOM 17 H 1 -3.393 0.254 -0.321 1.00 0.00
TER 18 1
This works,
but it may take the next person who reads middle.sh
a moment to figure out what it does.
We can improve our script by adding some comments at the top:
$ nano middle.sh
# Select lines from the middle of a file.
# Usage: bash middle.sh filename end_line num_lines
head -n "$2" "$1" | tail -n "$3"
A comment starts with a #
character and runs to the end of the line.
The computer ignores comments,
but they’re invaluable for helping people (including your future self) understand and use scripts.
The only caveat is that each time you modify the script,
you should check that the comment is still accurate:
an explanation that sends the reader in the wrong direction is worse than none at all.
What if we want to process many files in a single pipeline?
For example, if we want to sort our .pdb
files by length, we would type:
$ wc -l *.pdb | sort -n
because wc -l
lists the number of lines in the files
(recall that wc
stands for ‘word count’, adding the -l
option means ‘count lines’ instead)
and sort -n
sorts things numerically.
We could put this in a file,
but then it would only ever sort a list of .pdb
files in the current directory.
If we want to be able to get a sorted list of other kinds of files,
we need a way to get all those names into the script.
We can’t use $1
, $2
, and so on
because we don’t know how many files there are.
Instead, we use the special variable $@
,
which means,
‘All of the command-line arguments to the shell script’.
We also should put $@
inside double-quotes
to handle the case of arguments containing spaces
("$@"
is special syntax and is equivalent to "$1"
"$2"
…).
Here’s an example:
$ nano sorted.sh
# Sort files by their length.
# Usage: bash sorted.sh one_or_more_filenames
wc -l "$@" | sort -n
$ bash sorted.sh *.pdb ../creatures/*.dat
9 methane.pdb
12 ethane.pdb
15 propane.pdb
20 cubane.pdb
21 pentane.pdb
30 octane.pdb
163 ../creatures/basilisk.dat
163 ../creatures/minotaur.dat
163 ../creatures/unicorn.dat
596 total
Suppose we have just run a series of commands that did something useful — for example, that created a graph we’d like to use in a paper. We’d like to be able to re-create the graph later if we need to, so we want to save the commands in a file. Instead of typing them in again (and potentially getting them wrong) we can do this:
$ history | tail -n 5 > redo-figure-3.sh
The file redo-figure-3.sh
now contains:
297 bash goostats.sh NENE01729B.txt stats-NENE01729B.txt
298 bash goodiff.sh stats-NENE01729B.txt /data/validated/01729.txt > 01729-differences.txt
299 cut -d ',' -f 2-3 01729-differences.txt > 01729-time-series.txt
300 ygraph --format scatter --color bw --borders none 01729-time-series.txt figure-3.png
301 history | tail -n 5 > redo-figure-3.sh
After a moment’s work in an editor to remove the serial numbers on the commands,
and to remove the final line where we called the history
command,
we have a completely accurate record of how we created that figure.
Why Record Commands in the History Before Running Them?
If you run the command:
$ history | tail -n 5 > recent.sh
the last command in the file is the history command itself, i.e., the shell has added history to the command log before actually running it. In fact, the shell always adds commands to the log before running them. Why do you think it does this?
Solution
If a command causes something to crash or hang, it might be useful to know what that command was, in order to investigate the problem. Were the command only recorded after running it, we would not have a record of the last command run in the event of a crash.
In practice, most people develop shell scripts by running commands at the shell prompt a few times
to make sure they’re doing the right thing,
then saving them in a file for re-use.
This style of work allows people to recycle
what they discover about their data and their workflow with one call to history
and a bit of editing to clean up the output
and save it as a shell script.
In the interest of time we have to end this episode here, but if you are eager to work through some example exercises, follow the rest of this lesson here.
Key Points
Save commands in files (usually called shell scripts) for re-use.
bash [filename] runs the commands saved in a file.
$@ refers to all of a shell script's command-line arguments.
$1, $2, etc., refer to the first command-line argument, the second command-line argument, etc.
Place variables in quotes if the values might have spaces in them.
Letting users decide what files to process is more flexible and more consistent with built-in Unix commands.
.bashrc and aliases
Overview
Teaching: 5 min
Exercises: 5 min
Questions
How do I modify the .bashrc file?
Objectives
Customize your bash experience.
Define aliases to save you time and typing.
.bashrc
The bash shell allows for a great deal of customization including defining shortcuts for frequently used commands.
Such preferences are defined in a file in your home directory called .bashrc
, which is a shell script.
It’s used to save and load your terminal preferences and environmental variables.
In order to load your preferences, bash runs the contents of the .bashrc
file at each launch.
Some applications will modify your .bashrc
file when they are installed or initiated.
For instance, if you use Anaconda to manage the installation of personal Python or R libraries on your COLA account,
it will add some scripting code to your .bashrc
file so that it starts up properly when you login.
Let’s take a look at the contents of your .bashrc
file. Go to your home directory…
$ cd
$ ls -l .bashrc
-rw------- 1 jdoe123 users 520 Aug 21 13:16 .bashrc
You will see something like the result above.
A bit about the information shown when you perform a verbose file listing (i.e., using the -l
option):
- The first ten characters tell you about the nature and permissions of the object listed. Permissions are defined at three levels.
  - The first character tells what the object is:
    - - means it is a file
    - d is a directory
    - l is a link
    - There are other possibilities here, but these 3 are the most likely ones you will encounter.
  - The next 9 characters are 3 sets of 3 that each have the sequence rwx and describe the permissions:
    - r means readable (its contents can be viewed)
    - w means writable (its contents can be edited and changed)
    - x means executable (it is allowed to run on the computer as a stand-alone program)
    - - means it is not whichever of the above.
  - The 3 sets of 3 are, in order from left to right:
    - The permissions for the user that owns the file (who created it, or in this case ownership was assigned when the account was created).
    - The permissions for any member of the "group" that owns the file (the user is a member of this group).
    - The permissions for anyone who has an account on this computer.
  - In this example, -rw------- means this is a file that only the owner can read and write. No one else would be able to view or change the contents of this file. A user can change the permissions of any file or directory they own.
- 1 is the number of hard links to the object (not its disk usage). For a file this is usually 1; for a directory it grows with the number of subdirectories, so it can be a useful cue about what a directory contains, but it is largely irrelevant for day-to-day work.
- jdoe123 is the username of the owner. This is the person who can change file permissions.
- users here is the group name. A user may belong to multiple groups (e.g., different groups can be set up for different projects with different members), but a directory or file can only be owned by one group, just as it can only be owned by one user.
- 520 is the size of the file in bytes.
- Aug 21 13:16 is the time the file was last altered and saved. After about 6 months without any changes, the timestamp disappears and is replaced by the year.
- .bashrc is the file name.
Note that files with names starting with a period are usually system files and are, by default, "hidden". They will not show up with the ls command unless the -a option is used or the file is explicitly named as we did above.
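For example (a sketch; your own listing will differ depending on what is in your account), .bashrc only appears when -a is given:
$ ls
data-shell
$ ls -a
.  ..  .bashrc  data-shell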
Adding aliases to .bashrc
Edit .bashrc
with your preferred editor.
Let’s add some aliases. An alias is a command name that is defined to execute another command, set of commands (e.g., using pipes), or execute a script. You can also redefine an existing command name to have a different behavior, e.g., to make certain command options act as the defaults when they ordinarily are not. For example, you could redefine the rm
command with the alias rm -i
so that typing simply rm
would always ask for confirmation before deleting files.
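A minimal sketch of that rm example; adding this line to your .bashrc would make the interactive prompt the default in your login shells (aliases affect interactive use, not shell scripts):
alias rm="rm -i"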
You could add the following lines at the end of the .bashrc file, which will use the alias command to define aliases:
alias ls="ls -qx --color=always"
alias ll="ls -al --color=always"
alias lt="ls -alt --color=always | head -10"
The alias command defines the name on the left side of = to be everything in the quotation marks on the right, which enclose the command(s) as you would have typed them on the command line. Be sure there are no spaces on either side of =. This is a quirk of bash syntax.
- The first alias command redefines the default settings for the command ls so that it won't try to print unprintable characters in filenames (-q), it switches the way alphabetization is done, so it lists alphabetically across the columns on each line instead of down the columns (-x), and it uses colors to highlight the different kinds of files, directories and permissions (much like what ls -F accomplished with symbols).
- The second line defines a new command ll that gives a "long listing" of the directory contents (-l) including the hidden files (-a), also using colors.
- The third defines lt to give a long listing of the last 10 files or directories to have been changed. This actually uses two commands and pipes the result of the ls command into the head command using the pipe |. Pipes are a powerful way to chain commands together into sophisticated operations.
Save the file. The changes will not take effect in your current shell until you re-execute the commands in the .bashrc file. You do this with the source command (note that logging out and back in again will also achieve this, because bash runs the contents of the .bashrc file at each launch):
$ source .bashrc
Now try the commands and see how they look!
$ ls
To see a list of the aliases you have defined, use the alias
command with no arguments:
$ alias
alias ls='ls -qx --color=always'
alias ll='ls -al --color=always'
alias lt='ls -alt --color=always | head -10'
You can also define an alias by directly typing its definition on the command line. But once you log out (or if your connection is dropped), the alias definition would disappear.
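For example, defining a hypothetical shortcut directly at the prompt works immediately but lasts only for the current session:
$ alias gohome="cd ~"
$ gohome
$ pwd
/homes/jdoe123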
You can make many customizations in addition to defining aliases. You can define the list of directories in your default PATH that are searched for executables, change the way your command-line prompt appears, or preload software libraries, to name a few.
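Two common sketches of such customizations, assuming you keep personal programs in a bin directory under your home directory (adjust the details to your own setup), could also go in .bashrc:
export PATH="$HOME/bin:$PATH"    # search your own bin directory first for executables
export PS1="\u@\h:\w\$ "         # prompt showing user, host, and working directory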
Key Points
Unix shells can be launched in a customized way with the user’s preferences.
Aliases can be defined that substitute short strings for long or complex commands.
Finding Things
Overview
Teaching: 25 min
Exercises: 0 (DOYO) min
Questions
How can I find files?
How can I find things in files?
Objectives
Use grep to select lines from text files that match simple patterns.
Use find to find files and directories whose names match simple patterns.
Use the output of one command as the command-line argument(s) to another command.
Explain what is meant by 'text' and 'binary' files, and why many common tools don't handle the latter well.
grep
In the same way that many of us now use ‘Google’ as a verb meaning ‘to find’, Unix programmers often use the word ‘grep’. ‘grep’ is a contraction of ‘global/regular expression/print’, a common sequence of operations in early Unix text editors. It is also the name of a very useful command-line program.
grep
finds and prints lines in files that match a pattern.
For our examples,
we will use a file that contains three haikus taken from a
1998 competition in Salon magazine. For this set of examples,
we’re going to be working in the writing subdirectory:
$ cd
$ cd Desktop/data-shell/writing
$ cat haiku.txt
The Tao that is seen
Is not the true Tao, until
You bring fresh toner.
With searching comes loss
and the presence of absence:
"My Thesis" not found.
Yesterday it worked
Today it is not working
Software is like that.
Let’s find lines that contain the word ‘not’:
$ grep not haiku.txt
Is not the true Tao, until
"My Thesis" not found
Today it is not working
Here, not
is the pattern we’re searching for. The grep command searches through the file, looking for matches to the pattern specified. To use it type grep
, then the pattern we’re searching for and finally the name of the file (or files) we’re searching in.
The output is the three lines in the file that contain the letters ‘not’.
By default, grep searches for a pattern in a case-sensitive way. In addition, the search pattern we have selected does not have to form a complete word, as we will see in the next example.
Let’s search for the pattern: ‘The’.
$ grep The haiku.txt
The Tao that is seen
"My Thesis" not found.
This time, two lines that include the letters 'The' are output, one of which contained our search pattern within a larger word, 'Thesis'.
To restrict matches to lines containing the word 'The' on its own, we can give grep the -w option. This will limit matches to word boundaries.
Later in this lesson, we will also see how we can change the search behavior of grep with respect to its case sensitivity.
$ grep -w The haiku.txt
The Tao that is seen
Note that a ‘word boundary’ includes the start and end of a line, so not
just letters surrounded by spaces.
Sometimes we don’t
want to search for a single word, but a phrase. This is also easy to do with
grep
by putting the phrase in quotes.
$ grep -w "is not" haiku.txt
Today it is not working
We’ve now seen that you don’t have to have quotes around single words, but it is useful to use quotes when searching for multiple words. It also helps to make it easier to distinguish between the search term or phrase and the file being searched. We will use quotes in the remaining examples.
Another useful option is -n
, which numbers the lines that match:
$ grep -n "it" haiku.txt
5:With searching comes loss
9:Yesterday it worked
10:Today it is not working
Here, we can see that lines 5, 9, and 10 contain the letters ‘it’.
We can combine options (i.e. flags) as we do with other Unix commands.
For example, let’s find the lines that contain the word ‘the’. We can combine
the option -w
to find the lines that contain the word ‘the’ and -n
to number the lines that match:
$ grep -nw "the" haiku.txt
2:Is not the true Tao, until
6:and the presence of absence:
Now we want to use the option -i
to make our search case-insensitive (i.e., ignore case):
$ grep -nwi "the" haiku.txt
1:The Tao that is seen
2:Is not the true Tao, until
6:and the presence of absence:
Now, we want to use the option -v
to invert our search, i.e., we want to output
the lines that do not contain the word ‘the’.
$ grep -nwv "the" haiku.txt
1:The Tao that is seen
3:You bring fresh toner.
4:
5:With searching comes loss
7:"My Thesis" not found.
8:
9:Yesterday it worked
10:Today it is not working
11:Software is like that.
If we use the -r
(recursive) option,
grep
can search for a pattern recursively through a set of files in subdirectories.
Let’s search recursively for Yesterday
in the data-shell/writing
directory:
$ grep -r Yesterday .
data/LittleWomen.txt:"Yesterday, when Aunt was asleep and I was trying to be as still as a
data/LittleWomen.txt:Yesterday at dinner, when an Austrian officer stared at us and then
data/LittleWomen.txt:Yesterday was a quiet day spent in teaching, sewing, and writing in my
haiku.txt:Yesterday it worked
grep
has lots of other options. To find out what they are, we can type:
$ grep --help
Usage: grep [OPTION]... PATTERN [FILE]...
Search for PATTERN in each FILE or standard input.
PATTERN is, by default, a basic regular expression (BRE).
Example: grep -i 'hello world' menu.h main.c
Regexp selection and interpretation:
-E, --extended-regexp PATTERN is an extended regular expression (ERE)
-F, --fixed-strings PATTERN is a set of newline-separated fixed strings
-G, --basic-regexp PATTERN is a basic regular expression (BRE)
-P, --perl-regexp PATTERN is a Perl regular expression
-e, --regexp=PATTERN use PATTERN for matching
-f, --file=FILE obtain PATTERN from FILE
-i, --ignore-case ignore case distinctions
-w, --word-regexp force PATTERN to match only whole words
-x, --line-regexp force PATTERN to match only whole lines
-z, --null-data a data line ends in 0 byte, not newline
Miscellaneous:
... ... ...
Using grep
Which command would result in the following output:
and the presence of absence:
1. grep "of" haiku.txt
2. grep -E "of" haiku.txt
3. grep -w "of" haiku.txt
4. grep -i "of" haiku.txt
Solution
The correct answer is 3, because the -w option looks only for whole-word matches. The other options will also match 'of' when part of another word.
Wildcards
grep's real power doesn't come from its options, though; it comes from the fact that patterns can include wildcards. (The technical name for these is regular expressions (sometimes called regexp or regex), which is what the 're' in 'grep' stands for.) Regular expressions are both complex and powerful; if you want to do complex searches, please look at the lesson on our website. As a taster, we can find lines that have an 'o' in the second position like this:
$ grep -E "^.o" haiku.txt
You bring fresh toner.
Today it is not working
Software is like that.
We use the -E option and put the pattern in quotes to prevent the shell from trying to interpret it. (If the pattern contained a *, for example, the shell would try to expand it before running grep.) The ^ in the pattern anchors the match to the start of the line. The . matches a single character (just like ? in the shell), while the o matches an actual 'o'.
Tracking a Species
Leah has several hundred data files saved in one directory, each of which is formatted like this:
2013-11-05,deer,5 2013-11-05,rabbit,22 2013-11-05,raccoon,7 2013-11-06,rabbit,19 2013-11-06,deer,2
She wants to write a shell script that takes a species as the first command-line argument and a directory as the second argument. The script should return one file called species.txt containing a list of dates and the number of that species seen on each date. For example, using the data shown above, rabbit.txt would contain:
2013-11-05,22
2013-11-06,19
Put these commands and pipes in the right order to achieve this:
cut -d : -f 2
>
|
grep -w $1 -r $2
|
$1.txt
cut -d , -f 1,3
Hint: use man grep to look for how to grep text recursively in a directory, and man cut to select more than one field in a line.
An example of such a file is provided in data-shell/data/animal-counts/animals.txt.
Solution
grep -w $1 -r $2 | cut -d : -f 2 | cut -d , -f 1,3 > $1.txt
You would call the script above like this:
$ bash count-species.sh bear .
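For reference, the whole script can be as small as this sketch; the comments and the count-species.sh filename are illustrative, and only the pipeline itself comes from the solution above:
# count-species.sh
# Usage: bash count-species.sh <species> <directory>
# grep -r prints each match as "filename:line", so the first cut strips
# the filename; the second keeps the date (field 1) and count (field 3).
grep -w $1 -r $2 | cut -d : -f 2 | cut -d , -f 1,3 > $1.txt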
Little Women
You and your friend, having just finished reading Little Women by Louisa May Alcott, are in an argument. Of the four sisters in the book, Jo, Meg, Beth, and Amy, your friend thinks that Jo was the most mentioned. You, however, are certain it was Amy. Luckily, you have a file
LittleWomen.txt
containing the full text of the novel (data-shell/writing/data/LittleWomen.txt
). Using a for loop, how would you tabulate the number of times each of the four sisters is mentioned?
Hint: one solution might employ the commands grep and wc and a |, while another might utilize grep options. There is often more than one way to solve a programming task, so a particular solution is usually chosen based on a combination of yielding the correct result, elegance, readability, and speed.
Solutions
for sis in Jo Meg Beth Amy
do
    echo $sis:
    grep -ow $sis LittleWomen.txt | wc -l
done
Alternative, slightly inferior solution:
for sis in Jo Meg Beth Amy
do
    echo $sis:
    grep -ocw $sis LittleWomen.txt
done
This solution is inferior because
grep -c
only reports the number of lines matched. The total number of matches reported by this method will be lower if there is more than one match per line.
Perceptive observers may have noticed that character names sometimes appear in all-uppercase in chapter titles (e.g. ‘MEG GOES TO VANITY FAIR’). If you wanted to count these as well, you could add the
-i
option for case-insensitivity (though in this case, it doesn’t affect the answer to which sister is mentioned most frequently).
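Putting those pieces together, a case-insensitive tally might look like this sketch, which simply adds -i to the first solution (the exact counts will differ from the case-sensitive run):
for sis in Jo Meg Beth Amy
do
    echo $sis:
    grep -iow $sis LittleWomen.txt | wc -l
done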
find
While grep
finds lines in files,
the find
command finds files themselves.
Again,
it has a lot of options;
to show how the simplest ones work, we’ll use the directory tree shown below.
Nelle’s writing
directory contains one file called haiku.txt
and three subdirectories:
thesis
(which contains a sadly empty file, empty-draft.md
);
data
(which contains three files LittleWomen.txt
, one.txt
and two.txt
);
and a tools
directory that contains the programs format
and stats
,
and a subdirectory called old
, with a file oldtool
.
For our first command,
let’s run find .
(remember to run this command from the data-shell/writing
folder).
$ find .
.
./data
./data/one.txt
./data/LittleWomen.txt
./data/two.txt
./tools
./tools/format
./tools/old
./tools/old/oldtool
./tools/stats
./haiku.txt
./thesis
./thesis/empty-draft.md
As always,
the .
on its own means the current working directory,
which is where we want our search to start.
find
’s output is the names of every file and directory
under the current working directory.
This can seem useless at first, but find
has many options
to filter the output and in this lesson we will discover some
of them.
The first option in our list is
-type d
that means ‘things that are directories’.
Sure enough,
find
’s output is the names of the five directories in our little tree
(including .
):
$ find . -type d
.
./data
./thesis
./tools
./tools/old
Notice that the objects find
finds are not listed in any particular order.
If we change -type d
to -type f
,
we get a listing of all the files instead:
$ find . -type f
./haiku.txt
./tools/stats
./tools/old/oldtool
./tools/format
./thesis/empty-draft.md
./data/one.txt
./data/LittleWomen.txt
./data/two.txt
Now let’s try matching by name:
$ find . -name *.txt
./haiku.txt
We expected it to find all the text files,
but it only prints out ./haiku.txt
.
The problem is that the shell expands wildcard characters like *
before commands run.
Since *.txt
in the current directory expands to haiku.txt
,
the command we actually ran was:
$ find . -name haiku.txt
find
did what we asked; we just asked for the wrong thing.
To get what we want,
let’s do what we did with grep
:
put *.txt
in quotes to prevent the shell from expanding the *
wildcard.
This way,
find
actually gets the pattern *.txt
, not the expanded filename haiku.txt
:
$ find . -name "*.txt"
./data/one.txt
./data/LittleWomen.txt
./data/two.txt
./haiku.txt
Listing vs. Finding
ls and find can be made to do similar things given the right options, but under normal circumstances, ls lists everything it can, while find searches for things with certain properties and shows them.
As we said earlier,
the command line’s power lies in combining tools.
We’ve seen how to do that with pipes;
let’s look at another technique.
As we just saw,
find . -name "*.txt"
gives us a list of all text files in or below the current directory.
How can we combine that with wc -l
to count the lines in all those files?
The simplest way is to put the find
command inside $()
:
$ wc -l $(find . -name "*.txt")
11 ./haiku.txt
300 ./data/two.txt
21022 ./data/LittleWomen.txt
70 ./data/one.txt
21403 total
When the shell executes this command,
the first thing it does is run whatever is inside the $()
.
It then replaces the $()
expression with that command’s output.
Since the output of find
is the four filenames ./data/one.txt
, ./data/LittleWomen.txt
, ./data/two.txt
, and ./haiku.txt
,
the shell constructs the command:
$ wc -l ./data/one.txt ./data/LittleWomen.txt ./data/two.txt ./haiku.txt
which is what we wanted.
This expansion is exactly what the shell does when it expands wildcards like *
and ?
,
but lets us use any command we want as our own ‘wildcard’.
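For instance, nothing limits us to wc: the same file list can be handed to any command we like. As a quick sketch, this passes it to ls -l instead (output not shown, since it depends on your copies of the files):
$ ls -l $(find . -name "*.txt")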
It’s very common to use find
and grep
together.
The first finds files that match a pattern;
the second looks for lines inside those files that match another pattern.
Here, for example, we can find PDB files that contain iron atoms
by looking for the string ‘FE’ in all the .pdb
files in or below the parent of the current directory:
$ grep "FE" $(find .. -name "*.pdb")
../data/pdb/heme.pdb:ATOM 25 FE 1 -0.924 0.535 -0.518
Matching and Subtracting
The -v option to grep inverts pattern matching, so that only lines which do not match the pattern are printed. Given that, which of the following commands will find all files in data whose names end in s.txt but whose names also do not contain the string net? (For example, animals.txt or amino-acids.txt but not planets.txt.) Once you have thought about your answer, you can test the commands in the data-shell directory.
1. find data -name "*s.txt" | grep -v net
2. find data -name *s.txt | grep -v net
3. grep -v "net" $(find data -name "*s.txt")
4. None of the above.
Solution
The correct answer is 1. Putting the match expression in quotes prevents the shell from expanding it, so it gets passed to the find command.
Option 2 is incorrect because the shell expands *s.txt instead of passing the wildcard expression to find.
Option 3 is incorrect because it searches the contents of the files for lines which do not match ‘net’, rather than searching the file names.
Binary Files
We have focused exclusively on finding patterns in text files. What if your data is stored as images, in databases, or in some other format?
A handful of tools extend grep to handle a few non-text formats. But a more generalizable approach is to convert the data to text, or extract the text-like elements from the data. On the one hand, it makes simple things easy to do. On the other hand, complex things are usually impossible. For example, it’s easy enough to write a program that will extract X and Y dimensions from image files for grep to play with, but how would you write something to find values in a spreadsheet whose cells contained formulas?
A last option is to recognize that the shell and text processing have their limits, and to use another programming language. When the time comes to do this, don’t be too hard on the shell: many modern programming languages have borrowed a lot of ideas from it, and imitation is also the sincerest form of praise.
The Unix shell is older than most of the people who use it. It has survived so long because it is one of the most productive programming environments ever created — maybe even the most productive. Its syntax may be cryptic, but people who have mastered it can experiment with different commands interactively, then use what they have learned to automate their work. Graphical user interfaces may be easier to use at first, but once learned, the productivity in the shell is unbeatable. And as Alfred North Whitehead wrote in 1911, ‘Civilization advances by extending the number of important operations which we can perform without thinking about them.’
find Pipeline Reading Comprehension
Write a short explanatory comment for the following shell script:
wc -l $(find . -name "*.dat") | sort -n
Solution
1. Find all files with a .dat extension recursively from the current directory
2. Count the number of lines each of these files contains
3. Sort the output from step 2 numerically
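Written as an actual comment at the top of a script, those three steps might be condensed to something like this sketch:
# List all .dat files at or below the current directory with their
# line counts, sorted from fewest lines to most.
wc -l $(find . -name "*.dat") | sort -n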
Key Points
find
finds files with specific properties that match patterns.
grep
selects lines in files that match patterns.
--help
is an option supported by many Bash commands, and by programs that can be run from within Bash, to display more information on how to use the command or program.
man [command]
displays the manual page for a given command.
$([command])
inserts a command’s output in place.
SSH Authentication
Overview
Teaching: 15 min
Exercises: 0 min
Questions
Why use SSH key authentication?
How do I set up SSH key authentication?
Objectives
Set up SSH key authentication between your laptop and COLA servers.
What is SSH and what are SSH keys?
SSH, or secure shell, is an encrypted protocol used to communicate remotely with servers from another computer. When working on Unix or Linux servers like the COLA computers, you will frequently be connecting via terminal sessions using SSH.
SSH keys provide an extremely secure way of logging in that does not require remembering and typing your password each time. It is highly recommended as a safe, secure, and more convenient way to work on the COLA computers from your personal computer.
How does an SSH key work?
Surprisingly, password authentication is not the most secure way to use SSH. Although passwords are sent between client (your laptop) and server securely, they are usually not complex or long enough to resist brute-force attacks. Passwords are also vulnerable to being observed by prying eyes, or compromised by careless habits like keeping them written on a piece of paper.
SSH key pairs are two cryptographically linked keys that can be used for authentication between an SSH client and an SSH server. Each key pair consists of a public key and a private key. The private key is kept by the client. Any compromise of the private key will allow an attacker to access servers that are configured with the associated public key, so it should be kept secure and private. As an additional precaution, the key can be encrypted with a passphrase, which adds a further layer of protection.
The associated public key can be shared freely without any negative consequences: anything encrypted with the public key can only be decrypted with the matching private key.
The public key is uploaded to the server you want to log into with SSH.
The key is added to a special file in your home directory on the server at ~/.ssh/authorized_keys
.
On Windows computers…
On Windows, the MobaXterm software has menu settings to generate and use SSH keys for logging into remote servers. Follow the MobaXterm documentation for the version on your computer, as the method has varied between software versions.
On Macs and Linux laptops…
The first step to configure SSH key authentication is to generate an SSH key pair on your local computer.
To do this on a Mac or Linux system, use the ssh-keygen
command. By default, this will create a 3072-bit RSA key pair.
On your personal computer, you can open a new terminal session and generate an SSH key pair by typing ssh-keygen
.
But before doing so, check to see if you have already done this previously. Evidence will be in the hidden .ssh
directory under your home directory.
To check, type:
$ ls ~/.ssh
If you get a response like:
ls: cannot access '/home/username/.ssh': No such file or directory
Then you can proceed with the next steps. Otherwise, you can skip ahead to Copying an SSH public key to your server.
To generate an SSH key pair, type:
$ ssh-keygen
You will be offered a chance to select a location for the new keys.
Usually, it is best to stick with the default location.
The command will generate two files in your ~/.ssh
directory. The private key will be called id_rsa
and the public key will be id_rsa.pub
.
If you had previously generated an SSH key pair, you will see a warning prompt that looks like this:
/home/username/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will not be able to access any other previously authenticated servers automatically anymore. You will have to reestablish authentication with the newly generated key. So, be very careful when selecting yes, as this is a destructive process that cannot be reversed.
Next, you will be prompted for an optional passphrase:
Created directory '/home/username/.ssh'.
Enter passphrase (empty for no passphrase):
This can be used to encrypt the private key file on disk. This provides an extra layer of protection in case your laptop is stolen, hacked or infected with certain malware, helping to prevent an attacker from further gaining access to your COLA computing (or any other SSH key authenticated) account. Advantages include:
- The private SSH key is never exposed on the network. The passphrase is only used to decrypt the key on your computer locally. This means that network-based hacking will not be effective against the passphrase.
- The private key is kept within a restricted directory. The SSH client will not recognize private keys that are not kept in restricted directories.
The key itself must also have restricted permissions (read and write available only to the owner, i.e. you must see -rw------- when you use ls -l ~/.ssh/id_rsa, or authentication will fail). This means that if there are multiple userids on your computer, the other users cannot snoop the contents of your private key file. A fix for permission problems is sketched just after this list.
- Any attacker hoping to crack the private SSH key passphrase must already have access to your computer. This means that they would already have access to your user account or the root account. If you are in this position, a passphrase can prevent the attacker from immediately logging into other servers from your computer and doing more damage.
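If authentication fails because of loose permissions, you can restore the expected modes yourself; this sketch assumes the default key location used earlier:
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/id_rsa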
Once you have completed creating your private and public key pair, you will get a response saying the keys have been created, and also:
- A key fingerprint, which will be a long string of random characters
- A randomart image
Copying the public key to your server
There are several ways to upload your public key to the COLA computers. We describe two: the easiest, and the most surefire.
Using ssh-copy-id
The easiest way, if the command is available on your computer, is to use the ssh-copy-id
command.
The syntax is:
$ ssh-copy-id username@cola1.gmu.edu
where username
is your username.
You should see a message like this:
The authenticity of host 'cola1.gmu.edu (129.174.129.11)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)?
The hexcode in the “fingerprint” will be different for you, but this is the expected message the first time you connect to a new host in this way.
Type yes
and press ENTER to continue.
Next, you will be prompted for your password on the COLA servers. Type it in and press ENTER. You are done! You will not have to enter your password again until it expires or you change your key.
Manually copying your public key
If the approach above does not work for you, you can do the above process manually.
The content of your id_rsa.pub
file will have to be added to a file at ~/.ssh/authorized_keys on the COLA computer system.
To display the content of your id_rsa.pub key, type this into your local computer:
$ cat ~/.ssh/id_rsa.pub
You will see the key’s content, which may look something like this:
~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== username@hostname
Open a new terminal window and ssh
into cola1.gmu.edu
using your password.
Make sure that the ~/.ssh directory exists. If it does not, create it:
$ mkdir ~/.ssh
Now, you can create or modify the authorized_keys file within this directory. You can paste the contents of your id_rsa.pub file to the end of the authorized_keys file, creating it if necessary, using this:
$ echo [public_key_string] >> ~/.ssh/authorized_keys
But replace the [public_key_string]
with the output from the cat ~/.ssh/id_rsa.pub command that you executed on your local system.
It should start with ssh-rsa AAAA
… or something similar.
You can use Ctrl-C (or on a Mac, Cmd-C) to copy and Ctrl-V (Cmd-V) to paste the text between terminal windows.
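SSH is just as strict about permissions on the server side. If your login still prompts for a password after copying the key, a common fix (assuming the default locations) is to tighten the directory and file modes on the COLA machine:
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys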
Authenticating using SSH keys
Now you should be able to log into any of the COLA computers (not only cola1
) without a password!
The process is mostly the same as what you have already done. For example, for cola1
:
$ ssh -Y username@cola1.gmu.edu
If you used the manual copy method above, you may see something like this:
The authenticity of host 'cola1.gmu.edu (129.174.129.11)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)?
The hexcode in the “fingerprint” will be different for you, but this is the expected message the first time you connect to a new host in this way.
Type yes
and press ENTER to continue.
This kind of message will show up the first time you log into any of the COLA servers cola1
through cola7
.
But since they all share the same file system, your home directory (and its contents) is identical on all COLA servers, so you don’t have to copy your public key to each one.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created the key, you will be required to enter it now.
Hereafter, you will be in a new bash
shell session on the remote system.
This episode is based on the tutorial at: https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server
Key Points
SSH key pairs provide a secure way to log in to remote servers without typing a password.
ssh-keygen creates a key pair; the private key stays on your computer, while the public key is added to ~/.ssh/authorized_keys on the server.