I had a share folder in which users were continuously dropping MS Word Docs, a mix of .doc and .docx files. I needed to find a way to automatically convert those docs to text (.txt) so that they could be imported into a specialized database. I found the perfect solution at
http://blogs.technet.com/b/heyscriptingguy/archive/2008/11/12/how-can-i-convert-word-files-to-pdf-files.aspx .
It was easy to adapt the PowerShell script; I added a loop and it did the job perfectly. However, after running for a day I started to get errors and docs were not being converted. When I tried to manually open one of the problem docs (specifically ones with .doc extensions) in Word, I would get an mswrd632.wpc error and had to click through the errors a couple of times before the doc would open. I was able to manually save the problem doc as a .docx, and then the script would process it. After some research I found this link:
http://helpdeskgeek.com/office-tips/word-cannot-start-the-converter-mswrd632-wpc/
I chose to delete the registry key at
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Text Converters\Import\MSWord6.wpc
and that fixed my issue.
According to the author at the original link, the only issue this causes is that Word 97 docs no longer open in WordPad.
The PowerShell script as I modified it for my project:
# PowerShell: convert .doc and .docx files to plain .txt
# Infinite loop so the conversion keeps running as new files arrive
while ($true)
{
    $wdFormatText = 2
    $word = New-Object -ComObject word.application
    $word.visible = $false
    $folderpath = "c:\fso\*"
    $fileTypes = "*.docx","*.doc"
    Get-ChildItem -Path $folderpath -Include $fileTypes |
    ForEach-Object {
        $path = ($_.FullName).Substring(0, ($_.FullName).LastIndexOf("."))
        "Converting $path to txt ..."
        $doc = $word.Documents.Open($_.FullName)
        $doc.SaveAs([ref]$path, [ref]$wdFormatText)
        $doc.Close()
        Remove-Item $_
    }
    $word.Quit()
    Move-Item $folderpath\*.txt d:\someFolder
    Start-Sleep -Seconds 600
}
This is my beer-fueled, general-purpose sysadmin blog: no fancy stuff, just the fixes and other relevant things I come across as a sysadmin. If there are better ways to handle some of the things I write about, please share.
Tuesday, December 18, 2012
Sunday, December 16, 2012
Extend a VirtualBox Virtual Drive
I manage some laptops that run a small application in an Oracle VirtualBox (VB) virtual machine. The VMs run Win 7 64-bit and are built on a 60 GB fixed-size VMDK drive. Recently it became evident that 60 GB is not large enough to do the job anymore. I had several options for moving this VM to a larger drive. I could build a new version of the VM from the ground up on a larger virtual drive. The next option would be to image the VM with Clonezilla, Acronis, etc., lay that image down on a larger drive, and then if necessary use GParted to grow the system into the larger virtual drive. The last option was to use VB commands to grow the drive, then use GParted to expand the OS to use the extra space. I chose the last option.
First, my virtual disk was a 60 GB fixed-size VMDK, which VB cannot expand; the disk needs to be a dynamic VDI. Luckily VB has a way to deal with that situation using the "Virtual Media Manager".
In VB open the Virtual Media Manager from the File menu.
Select the target disk from the list.
Click the Copy button on the toolbar.
Click Next (here you can select a different drive if necessary).
Select VDI > Next.
Select Dynamically Allocated > Next.
Change the copy name and/or browse to a different folder if necessary.
Click the Copy button.
Depending on the source virtual drive size this can take a while.
Next, grow the copied virtual drive to the desired size. For example, to grow mydrive.vdi to 90 GB:
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyhd --resize 90000 "C:\path\mydrive.vdi"
Next, create a new VM in VB and select "use an existing drive" when creating the new system. Browse to your newly expanded drive and finish the configuration.
If you boot the newly created VM at this point, Windows will still show the original size of the virtual disk, and if you use Disk Management you will see the new space as unallocated. You can use that unallocated space to make a new drive, or you can use a tool such as GParted to grow the C: drive to use the space.
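One thing worth noting about the resize number: VBoxManage's --resize argument is in megabytes, so a target size in GB has to be converted first. A quick sketch of that arithmetic, using the 90 GB target from above:

```shell
# --resize takes megabytes; 90 GB (decimal) works out to the 90000 used above
target_gb=90
resize_mb=$((target_gb * 1000))
echo "$resize_mb"
```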
Thursday, December 6, 2012
Use RedHat Disk For Yum Repository
I wanted to use a REDHAT 6 install disk as a Yum Repository since I did not plan on registering the system I built for practice. Specifically I wanted to install x windows and a desktop after building the server without them.
Mount the install DVD:
mkdir /mnt/rhel_disc (use whatever path you want, but make sure it matches the "baseurl" line in the config file below.)
mount /dev/cdrom /mnt/rhel_disc
Create the config file dvd.repo in /etc/yum.repos.d
and add these lines:
[base]
name=CDROM
baseurl=file:///mnt/rhel_disc/Server
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
That's it; now yum away.
Friday, November 9, 2012
Google CHROME SSL Certificate Issue
I decided to give the Google Chrome browser a try. After installing I found that I could not connect to Google Drive and other Google sites. I kept getting the red-box error:
"The site's security certificate is not trusted!..."
The biggest problem was that there was just a single "BACK" button in the red box; there was no option to proceed anyway like in Firefox or IE. Basically, if I couldn't get to Google apps online, Chrome would be useless to me.
So I googled away at the issue and could not get a clear answer, even from Google Groups etc. The answers ran the gamut from sites being hijacked to my system being infected. I knew there was no problem with my system, since I had no issues reaching Google Drive etc. in Firefox. After a little more diligent searching I came across the hint here:
http://www.techrepublic.com/blog/google-in-the-enterprise/the-strange-case-of-the-google-certificate-roadblock/1292
It turns out I was missing a Windows root certificate update pushed earlier this year. See http://support.microsoft.com/kb/931125
I downloaded it after validating my system, and that fixed my issue. You would think that Google support would have this as one of the first things to check for this issue.
sqlplus / as sysdba vs sqlplus sys/pwd@sid as sysdba
Working with Oracle 10g XE on a Windows 2003 server, I could not connect with sqlplus / as sysdba but could with sqlplus sys/pword@sid as sysdba.
I got "ORA-12560: TNS: protocol adapter error".
After big searches on Google I could not get a clear-cut answer; everything trended towards not being able to use sqlplus at all. I found a couple of posts where someone specifically asked why they could do sqlplus sys/pwd@sid as sysdba but not sqlplus / as sysdba, but most of the answers were written as if the user could not use sqlplus at all.
Finally, out of curiosity, I echoed %ORACLE_SID% and got nothing back.
So I added the SID to the environment variables, rebooted, and the problem was fixed.
I'm not sure about the exact mechanics of the fix, but it fits how local connections work: with just sqlplus / there is no connect identifier, so sqlplus attaches directly to the local instance named by the ORACLE_SID environment variable instead of going through the listener, while sys/pwd@sid resolves the net service name and connects via the listener, which is why that form still worked.
Maybe someone can explain further?
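As a sketch of what I think was going on (shown as a POSIX shell check for illustration; on the Windows box it was %ORACLE_SID% and the Environment Variables dialog, and "XE" below is just an example SID):

```shell
# Without a connect identifier, "sqlplus /" attaches to the local instance
# named by ORACLE_SID, so an empty variable produces ORA-12560.
check_sid() {
  sid="$1"
  if [ -z "$sid" ]; then
    echo "ORACLE_SID is empty: sqlplus / as sysdba fails with ORA-12560"
  else
    echo "ORACLE_SID=$sid: sqlplus / as sysdba can find the local instance"
  fi
}

check_sid ""     # the state I was in
check_sid "XE"   # after adding the variable and rebooting
```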
AF 512e Hard Drive
I received a new Dell M6600 laptop with two drives: a standard SATA drive and an SSD. The box the laptop came in had an ominous orange note saying one or more of my drives may be one of the new Advanced Format 512e drives. The note pointed out that it is important to understand the implications of the new drive format and how that relates to the OS to be installed.
An AF 512e drive is formatted using 4K sectors vs the old-school 512-byte sectors, but emulates 512 for backwards compatibility with today's OSes. From what I found online, Windows 7 and the newest Linux versions (like RHEL 6) support AF 512e natively, but older Windows/Linux distros need special considerations, specifically ensuring that the partitions are aligned correctly.
I really wanted to know what drives I had, since I intended on installing RHEL 5.5, which would involve extra work to ensure the drive was partitioned correctly for AF 512e. I also needed to know which drive(s) were AF 512e.
<RANT>Anyway, according to the Dell propaganda that came with the laptop, I could download software called "Dell Advanced Format HDD Detection Tool". So I go to the link provided, which leads to their drivers page, where I put in the system tag number, and there is no such software available. I searched everywhere on Dell's site and found nothing but links back to the driver page.</RANT>
Another option was to disassemble the laptop and check the drives for an "AF" symbol on the label. Luckily this info can be gained from the command line as well in Windows 7 using fsutil. The fsutil you use must be v3, which comes as part of KB982018.
So I went ahead and ran the default Win7 setup that the new laptop had pre-installed and went to work. It turned out the SSD was an AF512e drive.
At the command prompt enter
fsutil fsinfo ntfsinfo c:
Look for these lines:
A standard 512 byte drive will show
“Bytes Per Sector : 512”
“Bytes Per Physical Sector : 512”
While an AF512e drive will return
“Bytes Per Sector : 512”
“Bytes Per Physical Sector : 4096”
So, knowing that I had an AF512e drive, I decided to go with RHEL 6 instead of RHEL 5, since it could supposedly handle the drive.
The RHEL 6 install went without a hitch. And a quick fdisk -l /dev/sdb showed the start sector for the partition was 2048, which is divisible by 8 (eight 512-byte logical sectors make one 4 KiB physical sector), so the partition is aligned correctly, as I have read online.
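That alignment rule can be sketched as a quick shell check: the start sector, counted in 512-byte logical sectors, must be a multiple of 8. The 63 below is just the classic DOS-era start sector, included for contrast:

```shell
# Report whether a partition start sector lands on a 4 KiB boundary
# (8 x 512-byte logical sectors = one 4096-byte physical sector)
check_alignment() {
  start="$1"
  if [ $((start % 8)) -eq 0 ]; then
    echo "start sector $start: aligned to 4 KiB"
  else
    echo "start sector $start: NOT aligned"
  fi
}

check_alignment 2048   # what fdisk -l reported
check_alignment 63     # old-style start sector, misaligned on 512e
```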
That part of the job was now done.
Thursday, September 20, 2012
Group Policies Not Letting Me Edit IE9 Security Settings
This problem has reared its ugly head more than once during the last few years. A user complains that some feature on a web site does not work properly for them; for example, an ActiveX script will not run or a Flash plugin does not function. So I go to set the security settings in IE, but they are all greyed out, even for the Administrator. You can't customize the security zones in IE or add to the trusted sites list for any one site.
This usually happens after the users bring their laptop somewhere where it is added to the local domain and policies are pushed to it. As well intentioned as these policies are, they are a pain in the ass and cause more grief than they are worth for me. The problem is that it is not readily evident which policies need to be disabled or adjusted to fix this. I usually recommend that these users stick to Firefox where possible.
After many Google searches on the topic and following recommendations describing which registry keys to edit and what policies to disable etc I finally stumbled on an answer that worked without fail.
Ref:
https://experts.missouristate.edu/display/csvhelpdesk/Trusted+Sites+in+Internet+Explorer+not+editable
Edit/disable these policies and you and your users can control IE as needed:
Local Computer Policy\Computer Configuration\Administrative Templates\Windows Components\Internet Explorer\
Security Zones: Do not allow users to change policies
(Prevents users from editing security zone settings. When enabled, the Custom Level button and the security-level slider are greyed out.)
Security Zones: Do not allow users to add/delete sites
(What it says.)
Security Zones: Use only machine settings
(Determines whether security zones are controlled on a per-user basis or at the local machine level.)
And the most important:
Local Computer Policy\Computer Configuration\Administrative Templates\Windows Components\Internet Explorer\Internet Control Panel\Security Page\
Site to Zone Assignment List
(This policy allows admins to use a GPO to populate the sites in the different IE security zones, but when enabled in IE7+ it prevents users from editing the sites list.)
Tuesday, September 18, 2012
yum update fail: Cannot retrieve repository metadata (repomd.xml) for repository
12 Sept 12
On a new RHEL 6 build freshly registered to a local Satellite server, I could not get yum update to work.
I kept getting the error:
Cannot retrieve repository metadata (repomd.xml) for repository:
Please verify its path and try again
I did the usual yum clean commands, but no help. I followed many suggestions found on Google, like verifying hostnames, host files, yum conf files, etc., but none worked.
Finally I found a clue and checked the file /etc/sysconfig/rhn/up2date.
I found the server URL and changed it from https to http, and my problem was fixed.
Not sure what was up with that, since I did not have this issue during the first install I did on that same system that same day, or after I rebuilt it again later the same day. In both those cases I registered and updated with no problems.
RHEL 6 Desktop GUI Install
If you are building a RHEL 6 system and end up with a command-line-only load, that is probably a good thing; why would you need a GUI to administer a Linux server? By default RHEL 6 does not install a desktop unless you tell it to do so.
There may be reasons why you would want a desktop on your RHEL 6 server, maybe you just like using GUI tools to administer the system. In my case I prefer the command line but need the GUI for end users.
The best time to add desktop support is during the build. During installation you will be prompted for the type of system you are building, i.e. basic server, web server, database server, etc.; this is what sets up the packages that will be installed. There is also an option for a desktop system, but what if you want a "basic server" with a GUI? By default the "basic server" will not have a desktop.
What to do? After setting your choice, click the "custom" button at the bottom of the page, go to the "Desktop" section, and choose GNOME, KDE, etc., along with X Windows. This will give you a desktop when the build is finished.
If you manage to build your server and forgot to add the desktop you can still add it via YUM. Get your server registered with a Satellite Server or RHN and group install GNOME or KDE along with X windows.
for example to install GNOME:
#yum groupinstall "X Window System" "Desktop"
Note: You will see many examples online saying
yum groupinstall "X Window System" "GNOME Desktop Environment"
However this will error out.
Friday, August 3, 2012
Use VLC to Bulk Convert WMA to MP3
I had a pile of WMA files that I wanted to bulk convert to mp3 and I did not want to pay for some shareware solution. I settled on using a bash script that invoked VLC to do the job. I found several good examples online of how to do this:
http://jcandrioli.wordpress.com/2010/11/24/how-to-convert-wma-to-mp3-using-vlc-instead-of-mplayer/
http://wiki.videolan.org/Transcode
I settled on the videolan wiki as my method of choice.
To start you need VLC and ffmpeg installed; the first URL has an example of installing those on Ubuntu-based systems. After you have those, you need to write a short bash script to do the work.
#!/bin/bash
acodec="mp3"        # target audio codec
arate="128"         # target bitrate (defined but not used below)
ext="mp3"           # extension appended to the output name
mux="ffmpeg"        # muxer VLC uses for the output file
vlc="/usr/bin/vlc"
fmt="wma"           # source extension to match
for a in *$fmt; do
$vlc -I dummy -vvv "$a" --sout "#transcode{acodec=$acodec}:standard{mux=$mux,dst=\"$a.$ext\",access=file}" vlc://quit
done
Put the script in the folder with your wma files. From the command line make the script executable:
#chmod +x wma2mp3.sh
then execute
#./wma2mp3.sh
I did not use the 'arate' variable, and the MP3s sounded fine; however, my WMA files were from an audiobook, so quality is not quite so important. Also, this was not exactly fast, but it works. The script was still running when I retired for the night; it had been running at least an hour at that point. In all I converted 341 files averaging 4 MB each.
The videolan wiki also gives batch script examples for doing this in Windows.
The one thing I did not like was that this process just appended .mp3 after the .wma extension, which offended my sensibilities. I will fix that later. In the meantime it can easily be addressed with the bulk file renamer included with my Mint Linux distro.
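For what it's worth, the double extension is also easy to clean up from bash; a minimal sketch, assuming output names like track.wma.mp3 from the script above:

```shell
# Rename "track.wma.mp3" to "track.mp3" by stripping the leftover ".wma"
for f in *.wma.mp3; do
  [ -e "$f" ] || continue              # nothing matched; skip
  mv -- "$f" "${f%.wma.mp3}.mp3"
done
```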
Thursday, August 2, 2012
Retina Can't Make SSH Connection
I could not successfully Retina scan a Red Hat 5.x virtual server, even though I could fully scan other similarly configured servers on the same ESXi platform. I was able to successfully ssh to the target from one of the other VMs, but could not establish an ssh connection via Retina. There was nothing in the logs on the target showing failed connections, but the Retina logs showed that the connection attempt timed out.
After troubleshooting the target and the Retina server I came up with a clue at
http://forums.eeye.com/index.php?/topic/2305-registry-error-threshold-exceeded/
I ended up setting
HKEY_LOCAL_MACHINE\SOFTWARE\eEye\Retina\5.0\Settings\SSH\DataTimeout to 60, it was 15.
This fixed the problem. Why the one VM server had an issue with timeouts I do not know since it is configured the same as the other VMs I could scan.
For 64-bit systems use: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\eEye\Retina\5.0\Settings\SSH
Monday, July 30, 2012
Get Serial Numbers from the Command Line
Use dmidecode or wmic bios to remotely get serial numbers from your servers or from a PC that is buried under a desk etc.
For Linux/Unix use dmidecode and for Windows use wmic bios.
#dmidecode | grep "Serial Number"
c:\> wmic bios get serialnumber
Run either command without arguments and you get much more than just serial numbers; for example, you can get motherboard details and BIOS versions. If you are running some generic system it might not have a serial number available, but Dell, HP, etc. should return the info.
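To show what the grep is keeping, here is the same filter run against a canned, made-up dmidecode-style sample (the names and numbers are invented):

```shell
# Made-up sample output; the grep keeps only the "Serial Number" lines,
# exactly like the dmidecode command above
sample='System Information
	Manufacturer: Example Inc.
	Serial Number: ABC1234
Base Board Information
	Serial Number: XYZ987'
printf '%s\n' "$sample" | grep "Serial Number"
```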
Sunday, July 29, 2012
Mount foreign LVM partition from USB drive to recover a VM
Ok, I got the urge to play Riven but couldn't install it on a Windows 7 64-bit system since it is an old 32-bit game. No biggie; I kind of suspected that, so I decided to go virtual. My Win7 system is the Home version, which does not have XP Mode available, so I settled on Oracle VirtualBox. I loaded VirtualBox and was just getting ready to break out the 32-bit XP install disk when I decided to make the job harder than it needed to be.
Just a few months ago on the same PC I was running Fedora 16 on a 1TB disk that was dual partitioned for Linux EXT3/LVM2 and NTFS. Since then the system was rebuilt with a new Drive running Win7 64bit. I was also running Virtual Box on the Fedora 16 system and had a fully configured XP VM already setup and updated to that point in time. So instead of starting from scratch I decided to see if I could get that VM off the old Fedora system.
I put the old drive in one of those USB drive docks and hooked it to my Win 7 system, then tried to access it via a couple of versions of Linux live CDs running in a VirtualBox VM, but had no luck mounting the partitions, so I decided to use my full-time Fedora 17 system to do the job.
On my Fedora 17 system I hooked up the USB drive, had some problems seeing the new drive, so I rebooted and all was well. To get to the files I needed to get into the LVM partition on the disk. Here are the steps, based on LVM2 (LVM1 is slightly different):
#pvscan (with this command I could verify that I could see the old LVM volume group and what name it had)
In my case pvscan returned my target as vg_fedormedia on /dev/sde5.
To move the volume group over you need to export and then import it:
#umount /dev/blah/blah (first it needs to be unmounted, which mine was not)
#vgchange -an vg_fedormedia (marks the group as inactive)
#vgexport vg_fedormedia (exports the group)
#vgimport vg_fedormedia (imports the group)
#vgchange -ay vg_fedormedia (activates the group)
Now I was ready to mount the directory holding my XP VM
#mkdir /mnt/oldXP
#mount /dev/vg_fedormedia/lv_home /mnt/oldXP
Once mounted, I could navigate to the target folder, and there was my XP VM. Now I needed to copy the VM to removable media. Since the drive in the USB dock was dual-partitioned with NTFS, I decided to copy the VM to the NTFS partition. Fedora had already mounted the NTFS partition, so all I had to do was use Nautilus to copy from the mounted directory to the NTFS partition and I was done. Essentially I copied the VM off the Linux partition on the drive over to the NTFS partition on the same drive.
After that I unmounted the old home and deactivated the volume group
#umount /dev/vg_fedormedia/lv_home
#vgchange -an vg_fedormedia (not sure if that was necessary, but I did it all the same)
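Put together, the whole recovery sequence looks like the sketch below. This is just a recap of the steps above, not a verbatim transcript; the VG name, LV name, and mount point are from my setup and will differ on yours. It defaults to a dry run that only prints the commands; flip DRY_RUN to 0 and run as root to actually execute them.

```shell
#!/bin/sh
# Recap of the foreign-VG recovery steps above (a sketch, not a transcript).
# VG/LV names and mount point are from my setup; adjust for yours.
VG="vg_fedormedia"
LV="lv_home"
MNT="/mnt/oldXP"
DRY_RUN="${DRY_RUN:-1}"   # default: only print the commands; set to 0 to run for real

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

run pvscan                  # confirm the old PV/VG is visible and get its name
run vgchange -an "$VG"      # mark the group inactive
run vgexport "$VG"          # export the group
run vgimport "$VG"          # import it on this system
run vgchange -ay "$VG"      # activate the group
run mkdir -p "$MNT"
run mount "/dev/$VG/$LV" "$MNT"
```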
In all it took about 30-40 minutes to get the 7 GB VM off the old drive and onto the NTFS partition, and another 8 minutes or so to copy it to the Win7 box. This still saved me the many hours it would have taken to build a new XP VM and download all the updates.
To import the XP VM into Virtual Box:
Start a new VM, make sure it is set to Windows and XP, and give it a name (it must be different from the VM you are importing or you get an error).
Set the RAM, 1024M in my case
Select "Use existing hard disk", navigate to the folder where the disk file is located, select it, and create.
Done
Red Hat Satellite Server: satellite-sync fails with DATA_TBS error
I was running a satellite-sync on a Red Hat Satellite Server v5.4 when I ran into an Oracle error: 'ORA-01654: unable to extend index ... in tablespace DATA_TBS'. Turns out I needed to extend the DATA_TBS tablespace in Oracle, which was an easy fix.
I picked up the info at http://www.redhat.com/magazine/023sep06/features/tips_tricks/?intcmp=bcm_edmsept_007
You should back up the database first, but since this Satellite server is a VMware VM I simply took a snapshot of the server first.
First stop all Satellite-related processes, which also stops Oracle: #rhn-satellite stop
Once the shutdown is done restart oracle then su to the oracle user:
#service oracle start
#su - oracle
Then run a db-control report:
$db-control report
The DATA_TBS tablespace had only 200M of space, so I decided to grow it to over a gig. There probably is a recommended size, but a short search of the Red Hat docs did not make it clear. The extend command only adds about 500M at a time to the tablespace, so I ran it a couple of times, then exited the oracle login:
$db-control extend DATA_TBS
$exit
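Since each extend pass only adds about 500M, a small loop saves retyping. This is just a sketch of the steps above; run it as the oracle user. The guard around db-control simply echoes the command on boxes where it is not installed.

```shell
# Sketch: grow DATA_TBS in ~500M steps, then report. Run as the oracle user.
# db-control is the Satellite database helper from the post; the guard below
# echoes the command instead where db-control is not installed.
run_dbc() {
    if command -v db-control >/dev/null 2>&1; then
        db-control "$@"
    else
        echo "(would run) db-control $*"
    fi
}

TBS="DATA_TBS"
PASSES=2               # each extend adds roughly 500M, so 2 passes is ~1G
i=1
while [ "$i" -le "$PASSES" ]; do
    echo "extend pass $i of $PASSES on $TBS"
    run_dbc extend "$TBS"
    i=$((i + 1))
done
run_dbc report         # verify the new size afterwards
```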
After that was accomplished I restarted rhn-satellite. Since Oracle was already running, a Bitchin'-Betty message appeared saying Satellite might have crashed; this can be ignored:
#rhn-satellite start
Saturday, July 28, 2012
How to Hide a Drive in Windows 7
I needed to hide a drive from users on a Win7 baseline. This will probably work on other Windows versions as well, if they support the policy.
To hide a drive from users, edit this group policy:
User Configuration > Administrative Templates > Windows Components > Windows Explorer
Enable "Hide these specified drives in My Computer", then pick the option you want in the drop-down box.
This only hides the drive in My Computer/Explorer; a determined user can still get to the drive if they want, so make sure you have permissions set correctly. For me the main goal was just to not make it easy for users to see the drive.
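Under the hood this policy just writes the NoDrives registry value, a bitmask with one bit per drive letter (A = bit 0 through Z = bit 25). If you ever need to compute the mask yourself, here is a sketch; the reg add line at the end is shown as a comment and uses the standard Explorer policies key.

```shell
# Compute the NoDrives bitmask used by the Explorer "hide these drives" policy.
# Bit 0 = A:, bit 1 = B:, ... bit 25 = Z:.
nodrives_mask() {
    mask=0
    for letter in "$@"; do
        up=$(printf '%s' "$letter" | tr 'a-z' 'A-Z')          # normalize case
        idx=$(( $(printf '%d' "'$up") - 65 ))                 # letter -> 0-based index
        mask=$(( mask | (1 << idx) ))
    done
    echo "$mask"
}

echo "hide D: only   -> $(nodrives_mask D)"    # prints 8
echo "hide A: and B: -> $(nodrives_mask A B)"  # prints 3

# On the Windows side you would then set the value per user, e.g.:
# reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" ^
#     /v NoDrives /t REG_DWORD /d 8 /f
```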
Can't Boot Windows 7 After Clonezilla Re-image
After re-imaging a Dell laptop with Clonezilla I could not get it to boot. I have used the same Win7 image on many systems so I knew it was good.
The first thing I did was boot to a Windows 7 rescue disk and let it try to auto-fix the problem, which ended up making it worse. I should have checked the BIOS settings and the active partition first, as described next, and then I think Windows would have rebooted no problem. But when I ran the auto-fix, it rewrote the BCD file based on bogus info and broke it.
Here are the steps I used to troubleshoot and fix:
Verified the SATA settings in the BIOS; it turned out they were set to RAID and should have been AHCI. After the BIOS fix I still could not boot, so I ran the Win7 rescue disk auto-repair again, which failed. From there I called up the repair console command window. The first thing I checked was whether the boot partition was active, using diskpart. At the prompt, type diskpart and press Enter; you will then be in a diskpart console with a '>' prompt.
>list disk
Lists the disks and their ID numbers, starting at disk 0
>select disk 0
Selects your target disk
>list partition
Lists the partitions on the drive; in my case there were 2. I had a larger system partition, which was partition 1, and a very small boot (BCD) partition, which was partition 2.
>select partition 2
Selects your target partition
>detail partition
Gives details about the partition, including whether it is hidden or active. In my case the partition was visible but not active, so I needed to set it active.
>active
Sets the selected partition active
>exit
Exits diskpart
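For repeat use, the same diskpart steps can be batched into a script file and run with diskpart /s. The disk and partition numbers below match my machine; check yours with list disk and list partition first.

```
rem mark-active.txt -- run with: diskpart /s mark-active.txt
select disk 0
select partition 2
detail partition
active
exit
```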
At this point I rebooted; Windows 7 tried to boot and failed, so I went into recovery mode. I let it try to auto-fix the system, but it failed, so back to the command line.
At the command line I tried these steps:
BOOTREC /FIXMBR (result: good)
BOOTREC /FIXBOOT (result: good)
BOOTREC /SCANOS (bad: total identified Windows installations: 0)
BOOTREC /REBUILDBCD (bad: total identified Windows installations: 0)
This did not work, so I decided to look at the boot configuration using bcdedit.
At the command line, type bcdedit and press Enter.
Results something like this:
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Windows\system32>bcdedit
Windows Boot Manager
--------------------
identifier {bootmgr}
device partition=\Device\HarddiskVolume2
description Windows Boot Manager
locale en-US
inherit {globalsettings}
default {current}
resumeobject {0542b908-aad8-11e1-a78f-ce537be42191}
displayorder {current}
toolsdisplayorder {memdiag}
timeout 30
Windows Boot Loader
-------------------
identifier {current}
device partition=D:
path \Windows\system32\winload.exe
description Windows 7
locale en-US
inherit {bootloadersettings}
recoverysequence {0542b90a-aad8-11e1-a78f-ce537be42191}
recoveryenabled Yes
osdevice partition=C:
systemroot \Windows
resumeobject {0542b908-aad8-11e1-a78f-ce537be42191}
nx OptIn
In my case I found that the Windows Boot Loader device was pointing to D:, not C:. I knew it should be C:, so I used bcdedit to fix it:
bcdedit /set {default} device partition=C:
Once done, I rebooted successfully into Win7.
Done
Sunday, July 8, 2012
Fedora 17 XFCE Desktop Corrupt
Out of the blue, applications lost the maximize, close, and minimize buttons at the top right of the window. New terminal windows started at the top left of the screen and could not be moved.
The chooser applet at the bottom of the screen was gone, and the four desktop buttons were gone. Also, terminal windows and menu options in all apps would lose focus the second the mouse moved, so I could not type in a command window or pick a menu option. A quick Google gave a quick answer to the fix. I got a crippled terminal window open and typed:
$sudo xfwm4
And the problem was fixed. The command instantly fixed the issues, but there were some residual error messages that I ignored since my goal was met.
xfwm4 is the Xfce window manager and "is responsible for the placement of windows on the screen, provides the window decorations and allows you for instance to move, resize or close them." (http://docs.xfce.org/xfce/xfwm4/introduction)
Thursday, June 21, 2012
Windows Restore Utility on Windows 2008 R2
I needed to move user roaming profiles from a W2K3 file server to a W2K8 file server. I used the NTBackup utility to back up the profiles on the old server in order to keep the correct permissions etc., only to find that the new backup utility on W2K8 does not work with old NTBackup backups. A quick Google search found that MS supplies a pared-down version of the NTBackup utility for use on Vista/2008 etc. called the "Restore Utility", which only restores backup files.
http://www.microsoft.com/en-us/download/details.aspx?id=4220
MS also says that the Removable Storage Manager needs to be turned on in Server Features for this to work. When I tried to find that feature on my W2K8 server it was not there. So back to Google, where I found that this feature is not available on W2K8 R2. After another quick Google query I found the solution:
http://www.microsoft.com/en-us/download/confirmation.aspx?id=24057
This is KB974674 and provides a version of the Restore Utility for W2K8 R2.