Install ImageMagick on a Windows machine, then convert EMF files from the command line:
magick infile.EMF outfile.pdf
magick infile.EMF outfile.png
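A minimal sketch for converting a whole folder of EMF files at once, assuming magick is on the PATH; the filenames and the case-sensitive .EMF match are illustrative:

```shell
# Convert every .EMF file in the current folder to PDF.
# ${f%.EMF} strips the extension, so drawing.EMF becomes drawing.pdf.
for f in *.EMF; do
    [ -e "$f" ] || continue   # skip cleanly when no .EMF files match
    magick "$f" "${f%.EMF}.pdf"
done
```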
28.2.19
29.11.18
Regex to extract citations and other parenthetical expressions from text
Print out parenthetical expressions using sed:
sed 's/(/\n(/g' InputFile.txt | sed 's/)/)\n/g' | grep "^(";
Remove parenthetical expressions using perl:
perl -pe 's|(\(.*?\))||g' InputFile.txt;
26.11.18
Homebrew, Mac OS 10.11, Xcode 8, Clang, and the problem with thread_local
The version of the clang compiler that comes with Xcode 7 does not recognize the thread_local keyword. Packages installed with Homebrew that use certain C++11 (or later) features may fail during compilation with errors like "thread-local storage is not supported for the current target". Xcode 8, which contains a suitable clang, cannot be run on Mac OS 10.11. Catch-22.
To work around, install a new version of gcc using Homebrew:
brew install gcc;
Now figure out which version of gcc was installed by Homebrew:
brew list gcc;
If gcc 8 was installed you will see something like: /usr/local/Cellar/gcc/8.2.0/bin/gcc-8
in the list of brew-installed gcc programs. Now tell Homebrew to use this newer gcc, instead of the old clang from Xcode, to install your program. As an example, I use the package poppler:
brew install --cc=gcc-8 poppler;
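The --cc value is just the basename of the brew-installed compiler, so it can be pulled out of the brew list output; the Cellar path below is a sample, not necessarily yours:

```shell
# Sample line from 'brew list gcc' (illustrative path and version):
line="/usr/local/Cellar/gcc/8.2.0/bin/gcc-8"
cc=$(basename "$line")      # yields gcc-8
echo "brew install --cc=$cc poppler"
```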
23.5.18
Quick create swap file on Rocks 6 cluster
swapon -s; #check existing swap
dd if=/dev/zero of=/state/partition1/swapfile bs=1024 count=200000k; #create swapfile, 200000k x 1kb = 200GB
mkswap /state/partition1/swapfile; #define swapfile
swapon /state/partition1/swapfile; #activate swapfile
cp /etc/fstab /etc/fstabORIG; #back up current fstab
echo "/state/partition1/swapfile swap swap defaults 0 0" >> /etc/fstab; #make permanent
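For a different swap size, the dd count follows the same arithmetic as the 200 GB example above (a sketch only; bs=1024 matches the command above):

```shell
# One-KiB blocks; count is the number of blocks for the target size.
TARGET_GB=200
COUNT=$(( TARGET_GB * 1000 * 1000 ))   # 200000000 blocks of 1 KiB, roughly 200 GB
echo "dd if=/dev/zero of=/state/partition1/swapfile bs=1024 count=$COUNT"
```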
16.5.18
Quick create RAID 0 scratch drive
Assume two new drives have been installed which show up as /dev/sdb and /dev/sdc in fdisk -l.
mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb /dev/sdc;
parted -s -a optimal /dev/md0 mklabel gpt; #assign a partition table type
parted -s -a optimal /dev/md0 mkpart primary 0% 100%; #create a single partition containing the entire disk
parted /dev/md0 print; #show some specs
mkfs.ext4 /dev/md0; #define the file system for the partition /dev/md0
mkdir /scratch; #the drive will be called 'scratch'
mount -t ext4 /dev/md0 /scratch; #mount drive for all users.
chmod -R 777 /scratch; #allow rwx access to everybody
aa=$(blkid /dev/md0 | awk '{print $2}' | sed 's/\"//g'); #get UUID of new raid array /dev/md0
cp /etc/fstab /etc/fstabORIG; #backup original fstab
echo "$aa /scratch ext4 defaults 0 0" >> /etc/fstab; #add a line to fstab to automount
umount /dev/md0; #unmount the raid array to test fstab
mount -a; #run /etc/fstab to remount
df; #make sure /dev/md0 is there
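The awk/sed step that extracts the UUID can be checked against a fake blkid line before running it on the real device (the UUID below is invented for the demo):

```shell
# blkid prints the device, UUID, and TYPE fields on one line.
sample='/dev/md0: UUID="0a1b2c3d-1111-2222-3333-444455556666" TYPE="ext4"'
aa=$(echo "$sample" | awk '{print $2}' | sed 's/\"//g')
echo "$aa /scratch ext4 defaults 0 0"   # the fstab line that gets appended
```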
26.4.18
fsck and repair on reboot
Problem: Input/output errors observed on disk access. dmesg or cat /var/log/messages shows "Unrecovered read error".
Solution: Your hard disk is bad. At a minimum it has some bad sectors. You can try to repair it by forcing the utility fsck to run on reboot. You must be root to do this.
su;
touch /forcefsck; #the presence of this file tells system to fsck on boot
echo "-y" > /fsckoptions; #option to automatically repair errors encountered
reboot;
The system will remove these files after completing the fsck.
The Rocks version looks like this:
su;
ssh compute-0-1 'touch /forcefsck';
ssh compute-0-1 'echo -y > /fsckoptions';
ssh compute-0-1 'reboot';
exit;
If there are problems and the fsck won't complete, you may need to boot using a live disk and remove forcefsck and fsckoptions manually. This is because the Rocks admin password may not work on a compute node.
Another approach is to search for bad blocks and write them to the 'bad block inode', so they will not be used in the future. This can be done interactively, e.g:
umount /dev/sda5;
e2fsck -ck /dev/sda5; #use badblocks read-only test to find bad blocks, faster.
or
e2fsck -cck /dev/sda5; #use badblocks read/write test to find bad blocks, slow.
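The forcefsck/fsckoptions trigger-file mechanism from the first approach can be rehearsed harmlessly under /tmp before touching the real root directory (the /tmp path is only for the demo):

```shell
# Stage the two files the init scripts look for, but in a sandbox directory.
DEMO=/tmp/fsck-demo
mkdir -p "$DEMO"
touch "$DEMO/forcefsck"             # presence alone is the trigger
echo "-y" > "$DEMO/fsckoptions"     # options handed to fsck at boot
ls "$DEMO" && cat "$DEMO/fsckoptions"
```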
24.10.17
Add new RAID 5 hard drive array to Rocks cluster without reboot, using MegaCli
These are some basic notes on how to add storage to your Rocks cluster. Everybody has a different situation, mine was Rocks 6.2, CentOS 6, five unoccupied hard drive bays in the head node. This procedure has been lifted from various places on the internet.
lspci | grep RAID; #determine if you have an LSI Logic RAID bus controller, if not stop here
fdisk -l; #list all devices, make a note of output
#add identical hard drives to all available bays
#login as root
cd /var/tmp;
wget http://techedemic.com/wp-content/uploads/2015/10/8-07-14_MegaCLI.zip; #download MegaCli, a command line utility to set up RAID
unzip 8-07-14_MegaCLI.zip;
cd Linux;
rpm -Uvh MegaCli-8.07.14-1.noarch.rpm; #install the package
alias MegaCli="/opt/MegaRAID/MegaCli/MegaCli64"; #set an alias to the executable for this session, add it to the root user .bashrc if you want it to be permanent
#some useful MegaCli commands to describe the situation
MegaCli -PDlist -aAll; #describe eligible drives
MegaCli -PDlist -aAll | grep -A2 "Enclosure Device"; #list the enclosure, slots, and their assignment to DiskGroups, i.e. Virtual drive for eligible drives
MegaCli -PDlist -aAll | grep 'Firmware state'; #status of eligible drives
MegaCli -LDInfo -Lall -a0; #list the virtual drives that have been defined thus far on adapter 0
MegaCli -PDlist -aAll | grep 'Foreign State'; #determine if any disks are in the foreign state, this may mean they were previously used on another machine, if so, clear them
MegaCli -CfgForeign -Clear -aAll; #clear any foreign configurations, verify using above command to check foreign state
#show mapping of physical drives to virtual drives
MegaCli -LdPdInfo -a0 | grep -E "Virtual Drive:|Slot Number:" | xargs | sed -r 's/(Slot Number:)(\s[0-9]+)/\2,/g' | sed 's/(Target Id: .)/Physical Drives ids:/g' | sed 's/Virtual Drive:/\nVirtual Drive:/g';
#from the above commands compile a list of values like:
Adapter ID: 0
Enclosure ID: 32
Physical Drive IDS (slots): 1,2,3,4,5
Raid Level: 5
#Basic command syntax to create the RAID is:
#MegaCli -CfgLdAdd -rX[enclosure_id:physical_id,enclosure_id:physical_id] -aN; #where X=RAID level, N=Adapter ID
#for the situation described above use the following to create a RAID 5 array from drives in slots 1-5 (slot 0 held the boot drive, don't mess with it)
MegaCli -CfgLdAdd -r5[32:1,32:2,32:3,32:4,32:5] -a0;
#confirm that you now have a new virtual drive, containing the physical drives you specified
MegaCli -LdPdInfo -a0 | grep -E "Virtual Drive:|Slot Number:" | xargs | sed -r 's/(Slot Number:)(\s[0-9]+)/\2,/g' | sed 's/(Target Id: .)/Physical Drives ids:/g' | sed 's/Virtual Drive:/\nVirtual Drive:/g'
MegaCli -LDInfo -Lall -a0; #list the virtual drives that have been defined thus far on adapter 0
fdisk -l; #list all devices, you should have a new one, mine appeared as /dev/sdc
#delete the virtual drive later if necessary using: MegaCli -CfgLdDel -Lx -aN
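With more slots, the bracketed drive list gets tedious to type by hand; it can be generated from the values compiled above (enclosure 32, slots 1-5, adapter 0, as in the example):

```shell
ENC=32; RAID=5; ADAPTER=0
SLOTS="1 2 3 4 5"
# Build 32:1,32:2 and so on, then drop the trailing comma.
DRIVES=$(for s in $SLOTS; do printf '%s:%s,' "$ENC" "$s"; done | sed 's/,$//')
echo "MegaCli -CfgLdAdd -r${RAID}[${DRIVES}] -a${ADAPTER}"
```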
#make a partition
parted /dev/sdc print; #view disk specs
parted -s -a optimal /dev/sdc mklabel gpt; #assign a partition table type
parted /dev/sdc print;
parted -s -a optimal /dev/sdc mkpart primary 0% 100%; #create a single partition containing the entire disk
parted /dev/sdc print;
mkfs.ext4 /dev/sdc1; #define the file system for the partition sdc1
parted /dev/sdc print;
#mount the drive to a shared location. Usually this is /export, which is generally just a symlink to /state/partition1. Those paths are used interchangeably below:
mkdir /export/space; #the drive will be called 'space'
mount -t ext4 /dev/sdc1 /export/space; #mount drive for all users. to unmount: umount /export/space
chown -R root:google-otp /export/space; #change ownership to google-otp group, which should include root and all users automatically
chmod -R 777 /export/space; #allow rwx access to everybody
#share drive to nodes, it will be accessible at /share/space:
cp /etc/exports /etc/exportsORIG; #preserve the original /etc/exports file
echo '/state/partition1/space 10.1.1.1(rw,async,no_root_squash) 10.1.0.0/255.255.0.0(rw,async)' >> /etc/exports; #add the shared drive description to /etc/exports
/etc/rc.d/init.d/nfs restart; #restart nfs
cp /etc/auto.share /etc/auto.shareORIG; #preserve the original /etc/auto.share file
echo 'space YOURHEADNODENAME.local:/state/partition1/&' >> /etc/auto.share; #where YOURHEADNODENAME is just that
make -C /var/411; #update the 411 configuration
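Before editing the live map, the auto.share line can be composed in a scratch copy and inspected (YOURHEADNODENAME stays a placeholder here, as above):

```shell
# Compose the automount map line into a throwaway file and check it.
DEMO=/tmp/auto.share.demo
echo 'space YOURHEADNODENAME.local:/state/partition1/&' > "$DEMO"
grep '^space ' "$DEMO"
```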