1. Linux Tips¶
A collection dating back to 2001.08.23.
The following tips are ones that I’ve become tired of looking up in the man and info pages (or searching the Internet for), then whittling down to their barest essences.
NOTE: «text within French quotes» indicates variable text – often a filename.
1.1. Redirecting Standard Error to Standard Out¶
The proper way to redirect stderr is to first decide where stdout is going, and THEN redirect stderr to stdout. So, for example:
$ «yada-yada» 2>&1 | «pager»
pipes both the regular output and the error messages of «yada-yada» into «pager»: the 2>&1 tells the system to send stderr to wherever stdout is going, which in this case is the pipe. (A pager is a program that allows a user to scroll through long documents. I use one called most, but older, commonly used ones include more and less.)
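The same ordering rule matters when sending everything to a file instead of a pager; a quick sketch:
$ «yada-yada» > «logfile» 2>&1
Both streams end up in «logfile». Reversing the order, as in «yada-yada» 2>&1 > «logfile», points stderr at the terminal before stdout is moved, which is usually not what you want.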
1.2. Making a patch file¶
Assume we have a Red Hat source RPM:
$ rpm -Uvh «package».src.rpm
$ cd /usr/src/redhat/SPECS
$ rpm -bp «package».spec
$ cd /usr/src/redhat/BUILD
$ mv «package» «package».orig
$ cd /usr/src/redhat/SPECS
$ rpm -bp «package».spec
$ cd /usr/src/redhat/BUILD
(edit to your heart's content)
$ diff -Naur «package».orig «package» > ../SOURCES/«package».patch
$ rm -rf «package».orig
$ cd /usr/src/redhat/SPECS
$ emacs «package».spec
...
Source: ...
Patch: «package».patch
...
%prep
%setup
%patch -p 1
^X^S^X^C
$ rpm -ba «package».spec
The idea is to create two directories with identical contents, then modify one of them. Create a diff file of the changes and save it. I THINK I got all the basic steps in there… However, it may be necessary to create the tarball too, in which case you need something like:
$ tar czvf «archive».tar.gz «directory_to_archive»
1.3. Copying directory trees¶
Often, it becomes necessary to copy entire directory trees from one directory to another. The method I saw somewhere was:
$ cd «olddir» ; tar cf - . | (cd «newdir» ; tar xpf -)
This creates a tarball that never actually becomes a file. The tarball is piped directly to a subshell, which changes to the new directory and untars the stream on the fly.
According to JP Abgrall, there’s an optimized way to do this:
$ tar cf - -C «olddir» . | tar xpf - -C «newdir»
1.4. Getting landscape output¶
It looks like mpage
will do the trick:
$ mpage -1lvH «filename» | lpr
The man page suggests that there’s a way to pass pr switches to mpage (using -p instead of -H), but I’ve been unable to pass the line-numbering switch -n together with pr into mpage. (What I want is the headings a la -H together with line numbering.)
A fancier way, requiring a bit more study, is to use enscript. In fact, enscript is neat for LOTS of stuff – customizable layout, “Page X of Y”, line numbers, etc.
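For what it’s worth, enscript seems able to deliver that combination directly; a sketch (check man enscript before trusting it): -r selects landscape, -C adds line numbers, and -G adds the fancy header. By default enscript sends the job straight to the printer:
$ enscript -r -C -G «filename»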
1.5. Verifying Red Hat packages¶
You can verify each package with the following command:
$ rpm --checksig «package».rpm
If you only wish to verify that each package has not been corrupted or tampered with, examine only the md5sum with the following command:
$ rpm --checksig --nopgp «package».rpm
1.6. Security: Watching the watchers…¶
The netstat
command is a handy tool for seeing who’s poking around
at any given moment:
$ netstat -v | most
1.7. Stripping comments:¶
Assuming you have a script that uses the number sign (a.k.a. pound symbol, hash mark) as a comment character, and you wish to examine only those lines containing “active” commands and options, the following will produce such a listing:
$ grep -v "^\#" «scriptfile» | grep -v "^[[:space:]]*$"
What’s happening: The file is first stripped of lines beginning with #. Then, that result is stripped of any lines which have 0 or more whitespace characters (and nothing else) between the start and end of the line.
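A single grep with an extended regular expression does both jobs in one pass (and also catches indented comments); a sketch:
$ grep -Ev "^[[:space:]]*(#|$)" «scriptfile»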
1.8. Verifying all RPM’s¶
Here’s a small script that constructs a list of all package names sans
version numbers, then feeds the list to rpm
with the --verify
option. It also echoes the package name:
$ for i in $(rpm -qa --queryformat "%{NAME}\n" | sort)
$ do
$ echo $i":" >>verify.log 2>&1
$ rpm --verify $i >>verify.log 2>&1
$ done
The stuff enclosed in $(...) is command substitution: it gets run first, as a script within the script, and its output is substituted into the outer command. (See the Command Substitution section of man bash.)
1.9. Viewing post-installation RPM scripts¶
Occasionally, after the files are dropped onto the system, hither and yon, RPM will run a script embedded in the package file. It’s nice to see what the squirrels are doing under the hood:
$ rpm -q --scripts «package-name»
1.10. What are we listening to?¶
lsof shows which processes are listening on a given port. (Actually it stands for “list open files”, and it shows which files are currently open.) lsof -i will list which ports are open on the machine:
$ lsof -i
1.11. Finding duplicate files with identical contents¶
There’s probably a better way, but this worked for me:
$ diff -qrs «directory1» «directory2» 2>&1 | \
grep "identical$" > «unedited-shell-script.sh»
Then edit the file «unedited-shell-script.sh»
to your heart’s content.
1.12. Listing DNS stuff¶
Lots of different ways to do this, but I like:
$ nslookup
> ls -d gallaudet.edu
1.13. Who’s been sleeping in MY bed?¶
Here’s a more informative way to use the last
command:
$ last -adf «wtmp_file»
1.14. Clip-clip. Taking care of really LONG lines¶
Lots of times, we only need to see the beginning of lines in a file to
determine something. (For example, a subroutine that takes a single
string argument may have a really long string literal.) To see just the
first 100 characters on a line use the cut
command. Like so:
$ cut -b -100 «FY2000.sql» | land
(land is an alias I’ve set up to print a file in landscape orientation using the enscript command – a very nice printing program.)
1.15. Find and delete¶
Sometimes it’s nice to do something (like delete) a bunch of files based on a searchable criteria, e.g. portion of the filename, size, date, etc. Here’s how:
$ find . -empty -exec rm -v {} \;
This is a specific example that searches for empty files and
directories from the current working directory down, and then it
deletes them. The important parts are the {}
which gets replaced
with whatever has been found, and the \;
, an escaped semicolon
indicating the end of the command to be “exec’ed”. (And the -v
is
the ubiquitous “verbose” option, to tell you what’s happening.)
A better approach, I’m told, is:
$ find . -empty | xargs rm -v
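If your find is GNU find, the built-in -delete action does the same thing without spawning rm at all:
$ find . -empty -delete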
1.16. Formatting and using a floppy¶
Without mounting anything, just pop a floppy in the drive and:
$ fdformat /dev/fd0H1440
$ mke2fs /dev/fd0H1440
$ mount /dev/fd0 /mnt/floppy ...
$ umount /mnt/floppy
1.17. Mounting an NFS device¶
To mount an NFS disk:
$ mount «server»:«remote_directory» /«local_mount_point» -t nfs -o ro
1.18. Searching for files and manipulating them¶
To find files in or beneath the current directory, of type “file”, of size 800 KB or greater, and then pipe the results through the ls command:
$ find . -type f -size +800k -exec ls -l {} \;
As mentioned in an earlier tip, it’s better with xargs. The command above could be improved as:
$ find . -type f -size +800k | xargs ls -l
To find all files that match the pattern *.o
, print the full
filespec (%p
), the last-access time (%a
) and the last-modified
time (%t
), and prompt for deletion:
$ find . -name "*.o" -printf "%p\nA: %a\nM: %t\n" -exec rm -i {} \;
To find files whose data has changed since midnight:
$ find . -daystart -mtime 0
(The -mtime
can be replaced with the -ctime
to show files
whose status has changed.)
1.19. Starting a remote X windows program on a local screen¶
I must have had to do this at some point:
$ xon «remote_host» \
-access \
-user «remote_username» \
«remote_program_path»
1.20. Making a boot floppy¶
What the heck is a “floppy”? Well, if you have one:
$ dd bs=8192 if=/vmlinuz of=/dev/fd0
That allowed me to recover from a machine that someone infected with a boot sector virus. I still had to rebuild the kernel.
1.21. Switching parallel from printer to ZIP¶
Speaking of dead hardware… ZIP drives that connect to the parallel port:
$ modprobe -r lp
$ modprobe ppa
$ mount /dev/sdc4 /mnt/ZIP
1.22. More fun with RPM’s¶
That --queryformat
be some powerful voodoo. Convert the datetime
tags to human readable form via the system command:
$ convdate -c `rpm -q --queryformat "%{INSTALLTIME}\n" «package name» `
1.23. Burning CD’s¶
Unfortunately, with no CD-burner on any of the Linux boxes, you have to resort to M$ to do the actual burn. But just copying the files and trying to burn things didn’t seem to work. So, on a Linux box, create a disk image, then move the image to a M$ machine. Like so:
$ mkisofs -vrTJV "«Volume Label»" -o «image filename».iso «root directory»/
$ mount -t iso9660 -o ro,loop=/dev/loop0 «image filename».iso /mnt/cdrom
The first line makes an ISO file system and writes it to a file. The .iso extension just makes it easier for the Windoze software to recognize it as a CD image. The command line options used are: -v (verbose), -r (Rock Ridge extensions), -T (make TRANS.TBL files), -J (Joliet extensions), -V (volume label), and -o (output file).
The second line tests the image, by mounting it as though it were a real device.
Fancying the image creation up a bit, the following puts an abstract
on the CD and hides the TRANS.TBL
from systems that can handle
long file names:
$ mkisofs -vrTJV "«Volume Label»" \
-abstract "«Short description»" \
-hide-joliet-trans-tbl \
-o «image filename».iso «root directory»/
And a variation with some Macintosh options thrown in:
$ mkisofs -vrJV "«Volume Label»" \
-hfs \
-magic «magic file» \
-probe \
-o «image filename».iso «root directory»/
The magic file helps mkisofs
determine which CREATOR
and
TYPE
to use so that a Macintosh knows how to open the files. It
appears the magic file is only needed if the system cannot already
determine what the file is by examining the first few bytes. (I used
/dev/null
for the magic file.)
If you DO have a burner on your box, you can add the command:
$ cdrecord -v -speed=«##» dev=«#,#,#» -data «image filename»
In my case the -speed
is 10
and the dev
is 2,1,0
. This
burns the image file created by the mkisofs
command onto
your CD. If you don’t know the dev
, you can find it with one of
the following two lines:
$ cdrecord -scanbus
$ cdrecord dev=ATA -scanbus
depending on your kernel version and your CD burner controller. The second version picks up an ATA CD-burner under kernel 2.6.
1.24. Turning off NetQUE broadcasts¶
NetQue boxes attached to dumb printers keep sending RWHO
packets
(UDP/513
) all over campus. This is annoying. To turn it off:
telnet into the NetQue.
Type SU at the prompt. It will display a Password> prompt.
Type a Control-H (ASCII backspace) and then SYSTEM and hit Enter. (SYSTEM is the default password.)
If you get to the prompt, type DEFINE SERVER ANNOUNCEMENT DISABLE
Type SYNC
Type LO
Power-cycle the printer server for it to take effect.
1.25. Checking out from CVS¶
I don’t yet understand what I’ve done, but apparently, I got it right. The following pulled the latest version of Boa Constructor:
$ cvs -z3 \
-d:pserver:anonymous@cvs.Boa-Constructor.sourceforge.net:/cvsroot/boa-constuctor \
co boa
1.26. Streaming with Icecast and Darwin¶
Icecast streams MP3 and Ogg Vorbis; Darwin is Apple’s QuickTime streamer. Again, I’m not certain of all the details, but I’ve got them both going:
$ icecast -b
$ liveice -F ~/liveice.test 2> /dev/null
$ /usr/local/sbin/streamingadminserver.pl
The first line starts Icecast
listening. The second sends a stream to
Icecast
for rebroadcast. The third starts the Darwin server. Be sure to
check the configuration files in /etc/icecast
and ~/liveice.test
.
1.27. Pretty-printing code as web pages¶
My favorite program lister enscript
, can generate color-coded web
pages, as well as color-coded Postscript. Separate colors are used for
comments, keywords, functions, and quoted strings. To generate a page,
complete with a table of contents, the magic is:
$ enscript -E -C -G -j -Whtml --color --toc -p«output».html \
«program1» [«program2» «program3» ...]
1.28. md5sum¶
MD5 checksums are frequently distributed with files to be downloaded, in
an effort to ensure data integrity. The program md5sum
for Linux is
fairly easy to find, and may already be on your system.
To check a file, download the corresponding MD5SUM
file (possibly
named «filename».md5
) to the same directory where the files to be
checked live. Then issue the command:
$ md5sum -c «filename».md5
To create an MD5 checksum file for others to use against your files, issue the command:
$ md5sum «filenames» > «filename».md5
I downloaded a Microsoft DOS/Windows version of md5sum
from
https://etree.org/md5com.html. The web page suggests the directory in
which to save the program. It needs to be run from the DOS command
prompt.
1.29. Slow Hand¶
The problem: During a rescue operation, I needed to copy a HUGE file. Unfortunately, while booted up in Red Hat’s rescue mode, memory management seems to have some problems. Every attempt to copy this would go for a while then run out of memory and force a reboot.
Solution: I hypothesized that if I could slow the machine down, I might give it time to recover its memory. (I know I forget things when I think too fast.)
So, how?
Break the file into chunks and copy a chunk at a time, with delays between chunks.
The file (a bzipped tarball) was 689459745 bytes long.
dd (a file copying program) writes nulls when it hasn’t got any data, so I couldn’t write more data than was actually in the original file; otherwise I’d end up with nulls at the end. That means the block size times the number of blocks had to exactly match the file size.
I found a web page with a factoring calculator, and learned that the prime factors of 689459745 are 3, 5, 13, 19, 379 and 491.
Armed with that info, I opted for 379 blocks of 1819155 bytes each.
Finally, I wrote a little script:
#!/bin/sh
# SLOW DOWN! Copy slooooowly, and provide a running
# progress report comparing the file sizes of the
# two files in question. When done, compute the MD5
# checksum for each file, for comparison.
#
# NOTE: A 1-second delay wasn't enough. A 10-second
# delay was, but it was also probably overkill.
cd /mnt/sysimage/usr/share
rm /mnt/jaz/home.tbz
touch /mnt/jaz/home.tbz
ls -al home.tar.bz2
ls -al /mnt/jaz/home.tbz
sleep 1
for ((blk = 0; blk < 379; blk++))
do
dd if=home.tar.bz2 of=/mnt/jaz/home.tbz \
bs=1819155 count=1 \
seek=$blk skip=$blk
ls -al home.tar.bz2
ls -al /mnt/jaz/home.tbz
sleep 10
done
md5sum home.tar.bz2 /mnt/jaz/home.tbz
UPDATE: Apparently, I misread or misunderstood the dd
command. It
doesn’t pad its output with NULL’s unless you explicitly ask it to by
using the conv=sync option. So any reasonable block size should have
worked above…
Bob Solomon <wogsol (at) bestweb (dot) net> wrote to me about a different way to split up a file: Given a list of files that you want to tar, but make into several “chunks”:
$ tar -czf - «file1 file2 file3 ...» | split -b «###»m - «filename».tgz.
(where ###
is a block size in megabytes.) This creates files
filename.tgz.aa
, filename.tgz.ab
, filename.tgz.ac
… and
so on, with each file being ### MB long.
(Bob also uses an environment variable $DATE
which he sets to the
current date in the form yy-mm-dd
, in the base filename of the
split
command.)
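To reconstitute the original tarball from the chunks later, concatenating them back together in order should do it:
$ cat «filename».tgz.* | tar -xzf -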
1.30. Synchronizing with rsync¶
To copy big directory trees across the entire universe, and maintain protections, user and group IDs, etc., use rsync. It appears to have a few kinks – like it’s slow as molasses on my machine, I think I caused a kernel panic the first time I used it, and now that it’s finished a copy it appears to be hanging – but it gets the job done.
Important tip: It seems to do better at pushing files out to the remote machine, rather than pulling from the remote:
$ rsync -avz --rsh=ssh «local_directory_tree» \
«remote_machine»:«remote_destination»
1.31. Exploring binary RPM’s without installing them¶
Sometimes it’s nice to see what’s in an RPM file without actually
installing it. If you have the source RPM (.src
) then it’s
easy. Just make the binary. But if you don’t have it and don’t want to
bother getting it, you can extract the CPIO
“heart” of the RPM and
explore that. (cpio
= copy in and out.):
$ rpm2cpio «package».rpm > «package».cpio
$ cpio -it --verbose < «package».cpio | most
$ cpio -id --verbose < «package».cpio
The first line pulls out the CPIO from the RPM file. The second gives a verbose listing of the contents of the file, and the third actually does the extraction, forcing the creation of directories that aren’t already present.
1.32. Renaming all files in a directory¶
Sometimes you want to rename all the files in a directory, and the new names will in some way be based on the old names. Here’s one way to tackle the problem (not necessarily the best way):
$ ls | grep -v "[on]names" > onames
$ ls | grep -v "[on]names" > nnames
# (edit nnames and change the old filenames to their new filenames.)
# (edit onames and insert "mv " at the start of each line.)
$ paste onames nnames > script.sh
$ bash script.sh
$ rm onames nnames script.sh
The first two lines make identical files containing all the filenames in
the directory, sans the two files being created. The paste
command in
step 5 puts them together, line by line, so you end up with several lines
of mv old-name new-name
, which you just push through your shell.
If all you want to do is lowercase the names, this will do the trick:
$ for i in *; do mv "$i" "$(echo "$i" | tr '[:upper:]' '[:lower:]')"; done
1.33. “rpm -qil” in Debian¶
To query a Debian package and obtain both a package description and a list of files within the package, use the command:
$ (dpkg -p «package» ; dpkg -L «package» ) | «pager»
To just get a list of ALL packages (installed and uninstalled), use:
$ dpkg -l '*'
1.34. Handling NULL’s in PostgreSQL¶
The COALESCE
function is your friend. It allows you to substitute
a string for a NULL
value. The example below shows how to combine
several fields together into a single string:
SELECT COALESCE(name,'') || '-' ||
COALESCE(version,'') || '-' ||
COALESCE(release,'') || '\n\t' ||
COALESCE(summary,'') || '\n\t' ||
COALESCE(url,'')
FROM gri
WHERE name ~* '.*devel.*' and release ~* '.*ximian.*'
ORDER BY name;
When run on my database of installed RPMs, that produces output like this:
...
sane-backends-devel-1.0.8-1.ximian.1
The SANE (a universal scanner interface) development toolkit.
https://www.sane-project.org/
SDL-devel-1.2.4-1.ximian.3
Files needed to develop Simple DirectMedia Layer applications.
https://www.libsdl.org/
xmms-devel-1.2.7-1.ximian.4
XMMS - Static libraries and header files.
https://www.xmms.org/
(28 rows)
1.35. Using BitTorrent with RedHat¶
According to https://www.redhat.com/en, BitTorrent RPMs for Red Hat Linux 7.3 through 9 are available.
Usage is simple:
$ btdownloadcurses.py --url «https://URL.torrent»
Allow incoming TCP 6881 - 6889 to join the torrent swarm.
1.36. Decoding base-64 encoded text¶
If you end up with a file that is base-64 encoded, fetch a copy of
uudecode
and add a line to the top of the base-64 encoded file (if
it isn’t there already) that looks like:
begin-base64 664 «ASCII-file»
Then issue the command:
$ uudecode -m «b64-file»
This should read in b64-file
and create ASCII-file
.
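On a reasonably current system, the base64 program from GNU coreutils can decode the raw file directly, with no header line required:
$ base64 -d «b64-file» > «decoded-file»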
1.37. Optimizing SQL field lengths¶
Obvious, when one thinks about it, but… To obtain the lengths of all entries for a particular field, use:
SELECT LENGTH(«field_name») AS «column_name»
FROM «table_name»
GROUP BY «column_name»;
To list only the length of the longest entry in a field, use:
SELECT MAX(LENGTH(«field_name»)) AS «column_name»
FROM «table_name»;
1.38. A light at the end of the tunnel¶
I don’t yet consider myself an expert by any means, but I’m making progress understanding tunneling. Here’s an example:
$ ssh -L 7000:134.231.8.45:80 -l kevin.cole gallaudet.edu
http://localhost:7000/
The first line establishes a tunnel on the localhost: connections to local port 7000 are forwarded to 134.231.8.45, port 80, over an ssh login as kevin.cole on gallaudet.edu.
The second line is the URL to make use of the above tunnel, effectively connecting to http://134.231.8.45/.
1.39. Paper size¶
To switch to 8.5 * 11 paper size:
$ cd /usr/share/libgnomeprint/.../printers/
Edit the files GENERIC.xml
and PDF-WRITER.xml
. Change the
PhysicalSize
from A4
to USLetter
(no spaces).
1.40. GNU Privacy Guard (GPG) tip¶
Apparently, the GPG honor-http-proxy keyserver-option
is
buggy. (Either that, or privoxy
is.) So, in order to use commands
like:
$ gpg -v --keyserver x-hkp://pgp.mit.edu --refresh-keys
or:
$ gpg -v --keyserver x-hkp://pgp.mit.edu --send-keys
remember to issue the shell command unset http_proxy
first.
1.41. Searching for strings using grep the right way¶
I was constantly annoyed by grep
hanging indefinitely when
searching recursively. One possible reason for the trouble was that
grep
would encounter a “file” which was in fact a pipe or other
weird beastie that doesn’t really have a beginning or end. As a
result, grep
would search such a file indefinitely. So, instead,
use find to guarantee that grep
only searches actual normal
files:
$ find «path» -type f | xargs \
grep -H "«And I still haven't found what I'm looking for»"
And, an often related nuisance: when find encounters a filename with spaces, what it pipes to grep ends up (after xargs splits it) as several arguments. When it finds a file with a filename like “Scholarly Work.txt”, grep interprets it as a file named Scholarly and a second file named Work.txt. Not at all what I had intended. So, in the simple case, a solution is:
$ find «path» -type f -exec \
grep -Hil "«And I still haven't found what I'm looking for»" {} \;
grep
receives each filename supplied by find whole and intact.
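A third variation, assuming GNU find and xargs, keeps the batching benefit of xargs while leaving filenames with spaces intact, by separating the names with NUL characters:
$ find «path» -type f -print0 | xargs -0 \
grep -Hil "«And I still haven't found what I'm looking for»"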
1.42. Handling files with spaces in the name¶
Those nasty Mac OS X people, and later those nasty Windows folks have made life messy for us saints of Free / Libre Open Source Software. But, there is hope:
#!/bin/bash
#
# This script iterates through several file types and makes substitutions
# within them. It's done this way to get around funky directory and file
# names containing spaces. The IFS (internal field separator) is set up so
# that the NUL-delimited filenames coming from find's -print0 are read whole.
# The "set -f" turns off pathname expansion, allowing
# the "htm*" and "php*" to be passed as-is to the find command. And
# finally, the ${1:-.} says to use a dot (current directory) if no
# directory path is supplied on the command line.
IFS="$(echo -ne '\000')"
set -f
for filetype in "htm*" "php*" "py" "cgi" "c" "pl" "txt" \
"js" "asp" "shtm*" "pm" "java" "lore" "kid"
do
find ${1:-.}/ -iname "*.$filetype" -print0 | while read -d "$IFS" file
do
echo "\"$file\""
perl -p -i -e "s|\<i\>|\<em\>|g;" "$file"
perl -p -i -e "s|\<I\>|\<em\>|g;" "$file"
perl -p -i -e "s|\</i\>|\</em\>|g;" "$file"
perl -p -i -e "s|\</I\>|\</em\>|g;" "$file"
perl -p -i -e "s|\<b\>|\<strong\>|g;" "$file"
perl -p -i -e "s|\<B\>|\<strong\>|g;" "$file"
perl -p -i -e "s|\</b\>|\</strong\>|g;" "$file"
perl -p -i -e "s|\</B\>|\</strong\>|g;" "$file"
done
done
1.43. Fancier apache protection¶
First, enable some apache modules: auth_digest
, dav
, and
ssl
. The auth_digest
module enables the use of
better-encrypted usernames and passwords. The dav
module enables
WebDAV
, which allows those with the appropriate permissions to
look at the files in a directory using a file browser / manager, with
drag-n-drop, and the ability to add and delete files to the
directory. And, finally, the ssl
module gets down and dirty with
data transfer encryption. SSL’s a bitch, and therefore not covered
here:
$ a2enmod dav
$ a2enmod auth_digest
$ a2enmod ssl
Now, edit the file containing directives for your web directories (in
the case of Ubuntu, one of the files in /etc/apache2/sites-available/
e.g. default
). Add in a stanza for the URL you want to protect:
<Location «relative URL»>
Order Allow,Deny
Allow from all
Dav On
AuthType Digest
AuthName "«realm»"
AuthDigestDomain «relative URL»
AuthDigestProvider file
AuthUserFile «/path/to/password.file»
Require valid-user
</Location>
Finally, create the password file:
$ cd «/path/to/password.file»
$ htdigest -c «password.file» "«realm»" «username»
The realm is a short description of the area to be protected. The
realm in the htdigest
command should match the realm specified
with the AuthName
directive in the apache configuration
file. Ditto for the path to the password file. The htdigest command will create (-c) the password file, add the username to it, and prompt for a password.
It appears that it’s also a good idea to match up the argument in the
<Location>
directive with that in the AuthDigestDomain
directive. This should be “relative” to the DocumentRoot
. In other
words, it’s what appears after the https://host.domain.tld/
Addendum: Oh the perversity that is Micro$oft Winblows. Every variation of URL, username, etc. failed to create a mapped network drive. What finally worked… sort of? From inside Micro$oft Weird, opening a Network Place. But, not exactly. You see, it cannot create a new file directly. It needs to create a folder. So, it creates a new FOLDER inside the already shared WebDAV folder. That means all the files we expected to find were one level above the Network Place we created and had to be moved into the newly created directory.
1.44. Restoring files with cp¶
To preserve dates, permissions, links, etc. when copying files:
$ cp -rvP --preserve=all «source directory» «destination»
1.45. QR Codes as SVG¶
qrencode
creates QR Code images as PNG files. If that’s all you
need, then the first line below will suffice. However, if you want to
convert the PNG to an SVG, go through an intermediate step that makes
a BMP:
$ qrencode -s 20 -o «filename».png "«blablabla.bla»"
$ convert «filename».png «filename».bmp
$ potrace -b svg -o «filename».svg «filename».bmp
$ rm «filename».bmp «filename».png
qrencode
produces the PNG (Portable Network Graphic), convert
converts it to a BMP (bitmap), and potrace
converts the BMP to an
SVG. (The rm
removes the first two files.) Use the same filename
extensions as shown above. “blablabla.bla” is just any text you want
to encode. You can also pipe a file to the encoder. -s 20
sets the
dot size to 20 pixels. The default size is 3 pixels. -o
indicates
the name of the output file. -b svg
specifies the “backend” which
tells potrace
the output format. (qrencode
is part of the
qrencode
package, potrace
is part of the potrace
package,
and convert
is part of the imagemagick
package.)
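Newer releases of qrencode appear to be able to write SVG directly via the -t option, which would make the convert / potrace dance unnecessary; worth a check of qrencode --help on your system:
$ qrencode -t SVG -s 20 -o «filename».svg "«blablabla.bla»"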
1.46. Inserting lines at the start of a file¶
sed
, the stream editor, is our friend here. To insert a single line:
sed -i '1s/^/«line to be inserted»\n/' «filename»
But a more clever approach that handles more than one line, methinks:
printf '%s\n%s\n' "«text to be inserted»" "$(cat «filename»)" > «filename»
1.47. Floating image in reStructuredText¶
To have an image with text appearing beside it in reStructuredText:
In the .rst
file:
.. container:: twocol

   .. container:: leftside

      .. figure:: /_static/illustrations/structure.svg

   .. container:: rightside

      Bla-bla-blah, and yada-yada.
In the custom CSS (I used a copy of sphinxdoc.css
which I put in
./source//_static/
):
div.leftside {
width: 414px;
padding: 0px 3px 0px 0px;
float: left;
}
div.rightside {
margin-left: 425px;
}
Each ..container::
becomes a <div>
. In my case, I wanted a
fixed width for the image and a variable width for the remainder. And,
with a wee bit o’ tweaking of the LaTeX produced by Sphinx, it also
did a decent job of producing two-column output for that section.
1.48. Learning Assembler !!!¶
(We’ll see how long this exercise in terror / futility lasts.)
Write some C code in a file “scratchpad.c
” (or whatever name you’d
like). Then:
$ gcc -S -fverbose-asm scratchpad.c
or:
$ gcc -g -c scratchpad.c
$ objdump -drwC -Mintel scratchpad.o
(The latter is actually a dis-assembly from the object binary. The first form is MUCH more useful to me: It interleaves the original C code with the assembly language it generates, making it easy to see “Oh. That ‘if’ statement compiles into these four assembly language instructions.”)
I have a feeling there’s much room for improvement on the process
(e.g. adding -O3
to optimize the hell out of the compilation,
adding -g
to the first form to include more debugging info) but
this is quite nice. There are more command-line options for the GCC
compiler than I’ve seen for any other Unix-y / Linux-y thing ever. So,
fine-tuning the output for learnability is for a future date.
Source: StackOverflow, naturally: Using GCC to produce readable assembly?
Continuing on the adventure: the Executable and Linkable Format, better known as ELF.
1.49. Rectangular blocks in emacs¶
I’m sure there was a time when I did this regularly, but it’s been ages! In any case, the magic:
Move the cursor to the starting location.
Mark the current location with Ctrl-x SPACE.
Move the cursor to the end location.
Kill the region with Ctrl-x r k.
Move to the new location.
Yank the region with Ctrl-x r y.
1.50. git “mirroring”¶
With a very small tweak, you can make your local git repositories push out to as many hosting sites as you like. Run the following once for each hosting site that you would like to replicate to (though you have to have created a repository there first, even if it’s empty):
git remote set-url --add --push origin «clone URL»
After that, any time you push, it will go to all of the sites that
you’ve used that line for. So now, many of my repositories are pushed
out to Codeberg, GitBox, and GitLab with every “git push
”
command. (”git pull
” will still pull from the one you initially
cloned from, or did your initial setup with.)
NOTE: The remote repository has to exist before adding it to the list of push destinations.
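A hypothetical example, with made-up Codeberg and GitLab clone URLs standing in for wherever your mirrors live:
$ git remote set-url --add --push origin git@codeberg.org:«user»/«repo».git
$ git remote set-url --add --push origin git@gitlab.com:«user»/«repo».git
$ git remote -v
(One gotcha: once any explicit push URL exists, the original clone URL is no longer pushed to automatically, so it is usually worth adding it back as a push URL with the same command.)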
1.51. Copying between two remote machines¶
It turns out that it is as simple as 1, 2, -3
:
$ scp -3 «remote1:/path/to/sources» «remote2:/path/to/destination»
1.52. Sledgehammer “git pull”¶
Probably not the smartest thing I’ve ever done, but at least not
as disastrous as doing pip update
on everything in sight, thus
trashing packages installed by apt
, yum
or pacman
…
This little ditty goes through as many git repositories as it can
find, and tries to issue a git pull
for each of them:
$ for i in $(locate /.git/ | sed -e "s/\/\.git.*//;" | sort | uniq)
$ do
$ cd $i
$ git pull
$ done
(It presumes the locate
package is installed and that updatedb
has been run recently.)
In some cases, this leads to conflicts, merge problems, etc.
1.53. Sucking down a web site directory tree¶
(There’s probably a more clever way with curl
these days…)
When a URL reveals a directory of stuff you want to get, while avoiding the stuff you don’t want:
$ wget -r -np -R "index.html*" -e robots=off «URL»
| Option | Meaning |
|---|---|
| -r | recursive |
| -np | no parent (don’t go to the root of the URL) |
| -R "index.html*" | don’t include the index.html files |
| -e robots=off | ignore what robots.txt is telling you to do |
1.54. Stop annoying animated GIFs¶
In Firefox go to the url about:config
and change
image.animation_mode
from normal
to none
.
1.55. Platform-provided Python packages¶
It turns out that, while in the virtual environment, I don’t have access to some of the Python packages installed via apt. In particular, the PyQt5 stuff. But, there’s an answer:
$ cd dirname
$ rm -rf ~/.local/share/virtualenvs/dirname*
$ rm Pipfile.lock
$ pipenv --three --site-packages
$ pipenv shell
$ pipenv update
Passing --site-packages
during the initial pipenv
setup adds
magic to ~/.local/share/virtualenvs/dirname...
or so it would
appear. As near as I can determine, it adds an
include-system-site-packages
line to
~/.local/share/virtualenvs/dirname-.../pyvenv.cfg
. Like so:
home = /usr
implementation = CPython
version_info = 3.8.5.final.0
virtualenv = 20.0.23
include-system-site-packages = true
base-prefix = /usr
base-exec-prefix = /usr
base-executable = /usr/bin/python3.8
prompt = (dirname)
And that’s where it gets the command line prompt as well.
1.56. Pretty-print XML¶
Sometimes one wants to read that billion-character single-line XML file in order to make sense of it. First, set the indentation.
For tab indentation:
export XMLLINT_INDENT=`echo -e '\t'`
For four space indentation:
export XMLLINT_INDENT='    '
Then:
xmllint -format -recover nonformatted.xml > formatted.xml
1.57. Reformatting XML a la Emacs¶
Actually, I suspect this works for a lot of different source material, provided Emacs recognizes the file type. In emacs parlance:
C-x h C-M-\
In more readable form:
Ctrl-X h Ctrl-Alt-\
Explanation:
Ctrl-X h runs the command "mark-whole-buffer"
Ctrl-Alt-\ runs the command "indent-region"
1.58. Adding an upstream repository to a forked repository¶
So, after forking a repository, it would be nice to be able to keep it synchronized. First, set up an upstream branch:
$ git clone git@github.com:kjcole/obs-midi.git
Cloning into 'obs-midi'...
remote: Enumerating objects: 106, done.
remote: Counting objects: 100% (106/106), done.
remote: Compressing objects: 100% (69/69), done.
remote: Total 5924 (delta 64), reused 69 (delta 37), pack-reused 5818
Receiving objects: 100% (5924/5924), 2.21 MiB | 6.04 MiB/s, done.
Resolving deltas: 100% (3778/3778), done.
$ cd obs-midi/
$ git remote -v
origin git@github.com:kjcole/obs-midi.git (fetch)
origin git@github.com:kjcole/obs-midi.git (push)
$ git remote add upstream git@github.com:cpyarger/obs-midi.git
$ git remote -v
origin git@github.com:kjcole/obs-midi.git (fetch)
origin git@github.com:kjcole/obs-midi.git (push)
upstream git@github.com:cpyarger/obs-midi.git (fetch)
upstream git@github.com:cpyarger/obs-midi.git (push)
Then at a later date, periodically lather, rinse, repeat:
$ git fetch upstream
$ git checkout main
$ git merge upstream/main
1.59. Searching for Unicode characters¶
My unprintable and expletives aliases (shown below) come in handy for finding everything that is not pure, printable, easy-to-type
ASCII. But sometimes, just sometimes, I need to search for a specific
Unicode character. For example, a long dash. most
(or hexdump
if you prefer) can reveal that at least one variant of a long dash is
hexadecimal E2B8BA. To search for it with grep
use:
$ grep $'\xE2\xB8\xBA' *
The aforementioned and very handy aliases are:
$ alias unprintable='grep --color="auto" -P -n "[\x00-\x1E]"'
$ alias expletives='grep --color="auto" -P -n "[^\x00-\x7E]" '
$ alias decomment='egrep -v "^[[:space:]]*((#|;|//).*)?$" '
unprintable
searches for any lines containing control characters
(ASCII characters in the range 0 to 31). expletives
searches for
any characters beyond ASCII, i.e. NOT in the range 0 to 127. (And,
because it is probably my most used alias decomment
shows the
contents of files sans any comment lines using #
, ;
or //
as the comment delimiter.)
1.60. Running multiple commands in Bash¶
There are three different ways to combine commands in the terminal:
;    command1 ; command2      Run command 1 first and then command 2
&&   command1 && command2     Run command 2 only if command 1 ends successfully
||   command1 || command2     Run command 2 only if command 1 fails
1.61. Bash: for i in range()¶
Two different techniques:
Method 1:
for i in {0..10..1}
do
    printf "%02d\n" $i  # Print as a 2-digit number with leading zeros
done
Method 2:
export END=10
for i in $(seq 0 $END)
do
    printf "%02d\n" $i  # Print as a 2-digit number with leading zeros
done
1.62. Bash substrings¶
This little ditty goes through a directory tree looking for filenames
ending in “~
” and comparing them to the same filename NOT
ending in “~
”. In other words it quickly compares backup version
of each file with the current version of the file. Like so:
for i in $(find . -name "*~")
do
diff -u $i ${i:0:-1} | most
done
The new tidbit for me was the substring syntax:
If $i is a variable, ${i:offset:length} is the substring (e.g. ${i:1:2} will give the 2nd and 3rd characters), and using a negative value for the length parameter indicates how many characters should be cut from the end. (A negative offset also works, but it needs a space after the colon, as in ${i: -3}, so that it isn’t parsed as the ${i:-default} expansion; without the space it appears to have no effect.) Either offset or length can be omitted:
$ x="This string"
$ echo ${x::5}
This
$ echo ${x:5}
string
Marco the Marvelous suggests an improvement: Just remove last character:
${i%?}
This one is just saying output $i, removing whatever matches the pattern after %, which here is ? (any single character). A more specific version of the previous one, since all your files end with a tilde:
"${i%\~}"
This one is just saying output $i minus whatever matches the pattern \~ (which, escaped, is just the ending tilde).
For more fancy variable stuff, see GNU Bash - Shell Parameter Expansion
1.63. Compare two files ignoring whitespace AND newlines¶
Suppose you have two files:
$ cat file-A
The quick brown fox jumped over the lazy dog's back. Now is the time...
$ cat file-B
The quick brown fox jumped
over the lazy dog's back.
Now is the time...
The text is identical, but the spacing is different. How can you determine that? A search of the package repositories turns up:
dwdiff - diff program that operates word by word
docdiff - Compares two files word by word / char by char
icdiff - terminal side-by-side colorized word diff
numdiff - Compare similar files with numeric fields
rfcdiff - compares two internet draft files and outputs the difference
wdiff - Compares two files word by word
wdiff-doc - Documentation for GNU wdiff
all of which may be worth exploring. However, StackExchange provides, once more:
$ diff <( tr -d " \n" <file-A ) \
<( tr -d " \n" <file-B )
The command uses tr
to delete all spaces and newlines from each
file, and then feeds them as inputs to diff
.
NOTE: The whitespace in the command is necessary.
Better still, git
provides! git
has a --word-diff
option:
git diff --word-diff file-A file-B
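Outside of a git working tree that command works on arbitrary files as-is; inside a repository, adding --no-index makes git treat the two paths as plain files:
$ git diff --no-index --word-diff file-A file-B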
1.64. Convert JPG to SVG in one swell foop¶
Easy (using ImageMagick as the middle-app):
convert -channel RGB -compress None input.jpg bmp:- | \
potrace -s - -o output.svg
1.65. Convert MIDI to MP3 in one swell foop¶
Thanks to StackOverflow - Convert midi to mp3 we have:
for i in *.midi
do
timidity $i -Ow -o - | \
ffmpeg -i - -acodec libmp3lame -ab 64k ${i:0:-5}.mp3
done
1.66. Expand MP4 to PNG frames¶
Sometimes, one just wants a single frame from a video:
ffmpeg -ss «timestamp» \
-i «source_movie».mp4 \
-t «duration» \
«destination_frames»_%04d.png
will produce a series of images, «destination_frames»_0000.png through «destination_frames»_9999.png. The -ss «timestamp» indicates the point from which the first frame should be extracted (e.g. 00:00 to start at the beginning) and the -t «duration» is the number of minutes and seconds to continue extracting (e.g. 01:04 to get one minute and four seconds of frames). The %04d indicates that the generated frames should have filenames that contain consecutive four-digit, zero-padded decimal numbers.
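If all you really want is that one single frame, asking ffmpeg for exactly one video frame is simpler; a sketch:
$ ffmpeg -ss «timestamp» -i «source_movie».mp4 -frames:v 1 «single_frame».png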
1.67. Running two applications in parallel¶
For this specific example, I wanted to have a GUI-based audio player play several tunes with an option for me to adjust the speed of playback, while simultaneously displaying a window with the sheet music for the corresponding tune. The magic that worked:
for i in polkas/john_ryans_polka \
polkas/peg_ryans_polka \
single_jigs-slides/denis_murphys_slide-no._1
do
evince $i.pdf & \
vlc $i.mp3 && \
fg
done
1.68. Create a new file with lines matching several patterns deleted¶
Suppose you have a mailing list, and you want to create a new mailing
list with several addresses excluded. Easy. fgrep
is your friend:
cat > remove
user1@host1.tld
user2@host2.tld
...
^D
fgrep -v -f remove source.csv > destination.csv
wc -l source.csv destination.csv remove
The fgrep
uses remove
as a file full of patterns to
match. (And the wc
is only there as a sanity check: The first
number should equal the sum of the second and third numbers.)
1.69. Making terminal command-line “movies”¶
The script
command will start a second Bash shell and record all
terminal I/O including user input, command output and ANSI escape
sequences for colorizing, cursor positioning, screen clearing, etc.
The default behavior is to save the data to a file named typescript
.
However, viewing such a file using standard tools is problematic:
Using cat typescript
shows the correct output, but it scrolls off
the screen too quickly. Using a pager such as most
prevents
scrolling but does not know how to interpret the ANSI escape
sequences, showing them as raw data, which drastically decreases the
readability of the script.
Enter scriptreplay
. By adding a timing file option to the
script
command, and then using that file with scriptreplay
the typescript
file will be displayed as if one was watching a
ghost user enter commands into a terminal: The timing file preserves
the original pauses that a user has while typing:
script -T timecodes
...
exit
produces two files: typescript
with the terminal I/O and timecodes
which contains the timing data for the keystrokes. By later typing:
scriptreplay -t timecodes
you can watch the “movie” in realtime.
The caveat: After issuing the scriptreplay
command, the first
thing you will see is the command prompt, making it appear as if the
scriptreplay
command has failed. This is an illusion: You are, in
fact, NOT seeing the command prompt: You are seeing the recording of a
command prompt: It is the first output in the typescript
file. Wait. Eventually, the “ghost user” will begin typing.
The above caveat can be mitigated by creating an alias for the
scriptreplay
command that adds in messages indicating that the
movie is about to start (or has just ended) and putting it into
.bash_aliases
, optionally adding an alias for script
as
well. The following aliases make script
automatically produce a
timecodes
file, and produce colorized (bright / bold red on
yellow) informative messages regarding the starting and ending of a
script replay to the scriptreplay
command:
alias script='script -T timecodes '
alias scriptreplay='clear ; echo -e "\e[1;43;31m### STARTING replay ###\e[0m\n" ; scriptreplay -t timecodes ; echo -e "\e[1;43;31m### FINISHED replay ###\e[0m" '
(That second alias is a bit long, and may eventually be changed to a
Bash function in .bashrc
. A wag of my acquaintance, the goode
Mr. Flint suggested the name scriptease
. Clever. I like it.)
Ultimately, functions that use a special directory and time-stamp both the I/O data file and the timing file that pairs with it might be a good way to keep multiple recordings, rather than overwrite the files on each use. For example:
mkdir -p ~/scripts/$(date --iso)
mv typescript ~/scripts/$(date --iso)/
mv timecodes ~/scripts/$(date --iso)/
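A rough sketch of what such a pair of functions might look like (the record and replay names and the ~/scripts layout are purely illustrative, not anything standard):
record () {
    local dir="$HOME/scripts/$(date --iso)"   # one directory per day
    mkdir -p "$dir"
    script -T "$dir/timecodes" "$dir/typescript"
}
replay () {
    local dir="${1:-$HOME/scripts/$(date --iso)}"
    scriptreplay "$dir/timecodes" "$dir/typescript"
}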
1.70. Reversing a video¶
To save a video that plays backwards:
ffmpeg -i input.mp4 -vf reverse output.mp4
Then with a tool like pitivi
you can append the output to the
input and get a nice looping effect.
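If the clip has an audio track that should run backwards too, ffmpeg’s areverse filter handles that side; a sketch:
$ ffmpeg -i input.mp4 -vf reverse -af areverse output.mp4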