Thread: [Linuxcommand-announce] [LinuxCommand.org: Tips, News And Rants] Project: Getting Ready For Ubuntu
From: William S. <bs...@pa...> - 2010-03-09 07:29:24
As you have probably heard, the next release of Ubuntu, 10.04 ("Lucid
Lynx") will occur during the final days of April 2010. My production
systems (the ones on which I do my writing and photography) are running
Ubuntu 8.04 and I have decided to upgrade them to the upcoming version.
This is the first of a five-part series that will document my transition
to the new version.
The 10.04 version, like the earlier 8.04 and 6.06 releases, is a
so-called LTS or Long Term Support version of Ubuntu. This means that
it receives security and bug fixes (but not application version
upgrades) for a period of three years. This differs from the usual
eighteen-month support period for ordinary Ubuntu releases. I have used
the LTS versions for several years and feel they are the best choice
for my production systems. I use a lot of Linux distros in my work, but
for the machines I must rely on, I choose stability over the latest
features. For example, my server systems are running CentOS 4 which
first appeared in early 2005 and is still supported by Red Hat and the
CentOS team. In fact, the main reason I switched from Red Hat (Fedora)
to Ubuntu for my desktop systems was the availability of the Long Term
Support versions, a feature that Fedora does not offer.
The Opportunity
In a past life, I ran the QA department of a software company and I
often employ these skills to perform software testing on new Linux
releases. This case will be no different. The first beta release of
10.04 is scheduled for March 18, so we will begin our work then. Testing
is not just something I do for fun (in fact, it isn't fun at all); it's
important to look for problems that might interfere with the deployment.
By checking for problems now, we have a better chance of getting them
fixed before the final release.
The Mission
Our mission is to upgrade the production systems while preserving the
existing data and functionality of the current systems. We'll also look
for exciting new features and applications that will enhance their
productive capacity. We will probably do a little scripting and system
administration along the way, too :-)
The Players
The two production systems involved are my Dell Inspiron 530N (which
originally shipped with Ubuntu 8.04 factory installed) and my IBM
ThinkPad T41 laptop. We'll also use my main test computer, a Dell
Dimension 2400N which is currently hosting our All-Text Linux
Workstation. We might also take a look at the 10.04 Netbook Remix
version to see if it offers any compelling reasons for upgrading my two
netbooks, which are now running 9.04 UNR.
Stay tuned. This ought to be fun.
Further Reading
- Ubuntu LTS
- Lucid Lynx Release Schedule
- Lucid Lynx Technical Overview
--
Posted By William Shotts to LinuxCommand.org: Tips, News And Rants at
3/09/2010 02:29:00 AM
From: William S. <bs...@pa...> - 2010-03-16 08:41:22
Last time, we announced our intention to upgrade some of our systems to
Ubuntu 10.04, so what's next? To paraphrase a sight gag from an early
episode of South Park, this installment covers Phase 1: the planning
phase of the upgrade process. Good planning is often the difference
between a good upgrade experience and a bad one. As a computing
environment becomes more complex, planning becomes more essential. My
environment is fairly complex, so I have to plan.
Objective
The first element of any good plan (and not just for OS upgrades) is a
clear objective. That is, in the broadest terms, what do we hope to
accomplish? In my case, I want to install Ubuntu 10.04 on two computers
(my main desktop system and my laptop) currently running Ubuntu 8.04
while maintaining all of the application set (and capabilities), network
integration, and user data.
Notice that I said install and not upgrade. I have learned through many
years of experience that upgrades don't really work. They kind of work
sometimes, but they never really work. Often, an OS upgrade is not able
to apply every new feature in the new version. For example, converting
an existing file system to a new file system type is often impossible.
Also, I don't want to reuse my existing configuration and settings
files. I want to reconfigure based on the new default configuration,
again to take advantage of everything the new release has to offer.
I know (from scanning the forums) that a lot of people are content to
just pop in an installation CD, push the upgrade button, and hope for
the best. Then those same people start crying because they get a black
screen when they reboot, or their wireless stops working, or their
sound is busted, etc., etc., etc.
System Survey
Since our objective states that we have to maintain the application set,
network integration, and user data, we had better figure out what those
are.
We do that by performing an inventory of our existing system. Here are
some things to keep an eye on:
- Required Applications. When performing your system survey, go through
the application menus and list everything you rely on. In an upcoming
installment we will write a script that prepares a package list to
compare against the new installation to ensure that we reinstall all
the apps we care about. If you use Firefox, pay special attention to
bookmarks, stored passwords, and add-ons.
- Network Services. My workstations mount file systems on NFSv4 and
Samba shares. I also use an LDAP database to manage my address book.
- Hardware Features. We will want to make sure that all of our hardware
is working. During the testing phase we will cover video, sound, wired
and wireless networking, CD/DVD reading and writing, printing,
scanners/cameras, and USB devices.
This is also a good time to do some system maintenance. Between
cleanings, computers accumulate a lot of application and data buildup. I
have added many applications to my base system, some of which I use and
others I don't. It's good to make a list and decide what you really want
on your "new" computer and what you can live without. The same goes for
data. Do you really need all those video files? See if you can clear a
few gigabytes off that disk. It will make things easier later.
Testing
We're going to get involved with Ubuntu 10.04 starting with the Beta 1
release, scheduled for March 18. I have a test computer prepared on
which I will attempt to build approximations of the finished systems. In
doing so, I will be able to see what, if anything, blocks my desired
configuration. It also provides a chance to look at the new features in
this release, as well as alternate applications. We will also use live
CDs to test the hardware support on the real systems.
Installation
Once the final release of Ubuntu 10.04 occurs on or about April 29, we
should be ready to install.
We'll make our installation media, create and verify our final system
backup, perform the installation, and restore any additional
applications.
Configuration
The final phase of the project is adjusting the configuration of the new
system to our liking. This will involve recreating accounts and
application settings. We will also restore the user data from our
backups. After everything is restored and configured, we will verify
that everything is in order. The very last step is to re-enable the
system backups and begin using the systems for production use.
--
Posted By William Shotts to LinuxCommand.org: Tips, News And Rants at
3/16/2010 04:19:00 AM
From: William S. <bs...@pa...> - 2010-03-23 19:36:13
Now that Ubuntu 10.04 Beta 1 has been released, it's time to start our
work. In this installment we will obtain a copy of Beta 1, make some
installation media, install it, and begin our testing.
Getting The Beta 1 Image
This page has links to the ISO images that we will use. Of course, you
could just download them from your web browser, but what's the fun in
that? Since we are command line junkies here at LinuxCommand.org, we use
the command line to download our images. You can do it like this:
me@linuxbox: ~$ wget url
where url is the web address of the ISO image we want to download. In my
case, I used this command to get the "PC (Intel x86) Desktop CD":
wget http://releases.ubuntu.com/10.04/ubuntu-10.04-beta1-desktop-i386.iso
Creating Installation Media
The next step is making the installation media. I always use re-writable
media for this kind of work, so we have to first "blank" our CD, then
write the image on it. To do this, we use the wodim program. First, we
need to determine what our system calls the CD burner. We can do this
with the following command:
me@linuxbox: ~$ wodim --devices
wodim will execute and print a list of the optical media drives it sees.
The results will look like this:
wodim: Overview of accessible drives (1 found) :
-------------------------------------------------------------------------
 0  dev='/dev/scd0'  rwrw-- : 'Optiarc' 'DVD+-RW AD-7200S'
-------------------------------------------------------------------------
On my system, we see that the CD-ROM drive/burner is the device
"/dev/scd0". Yours may be different. Insert the re-writable disk into
the drive. If your system automatically mounts the disk, unmount it
with a command such as this:
me@linuxbox: ~$ sudo umount /dev/scd0
Next, we blank the media with this command:
me@linuxbox: ~$ wodim -vv dev=/dev/scd0 blank=all
The blanking operation will take several minutes.
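As an aside, if you want to script these steps, you can pull the device
name out of the wodim output with a little text processing. Here is a
sketch run against the sample output shown above (the exact output
format may vary between wodim versions):

```shell
# Sample "wodim --devices" output line (from the listing above); real
# output may differ slightly on your system.
sample=" 0  dev='/dev/scd0'  rwrw-- : 'Optiarc' 'DVD+-RW AD-7200S'"

# Extract the quoted device path from the line
burner=$(printf '%s\n' "$sample" | sed -n "s/.*dev='\([^']*\)'.*/\1/p")
echo "$burner"    # /dev/scd0
```

The device name can then be reused in the umount, blank, and write
commands without retyping it.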
After it completes, we can write the image with this command:
me@linuxbox: ~$ wodim -v dev=/dev/scd0 -data ubuntu-10.04-beta1-desktop-i386.iso
After the write is completed, we need to verify that the disk matches
the ISO file. This command will do the trick:
me@linuxbox: ~$ md5sum ubuntu-10.04-beta1-desktop-i386.iso /dev/scd0
7ddbfbcfcc562bae2e160695ec820e39  ubuntu-10.04-beta1-desktop-i386.iso
7ddbfbcfcc562bae2e160695ec820e39  /dev/scd0
If the two checksums match, we have a good burn.
Installation
Depending on which variety of 10.04 you have downloaded (desktop,
alternate, etc.), the installation procedure should be familiar to any
Ubuntu user. The live desktop version differs from previous versions in
that it no longer prompts you to run live or install immediately after
booting; rather, you are forced to wait (and wait...) for the entire
live CD to come up before being presented with a graphical prompt. Not
an improvement, in my opinion.
After installation, the first thing we do is open a terminal and perform
an update of the system using the following commands:
sudo apt-get update
sudo apt-get upgrade
Be aware that during the testing period, the Ubuntu team releases a
steady stream of updates. It is not unusual for a hundred or more
package updates to be released each day during periods of heavy
development. I actually created this alias and put it in my .bashrc file
on the test machine:
alias update='sudo apt-get update && sudo apt-get upgrade'
Now I just have to type "update" to bring the machine up to date.
Paying For Your Software - Testing
This is a theme I have touched on before. If you have been an avid Linux
consumer, you should consider becoming an avid Linux producer. Great
software doesn't write itself. There are many ways you can help build
the future of computing (and by the way, cheerleading is not one of
them). One way is by performing good software testing.
I have included some links (below) that document some of the tools and
techniques that Ubuntu recommends for testing and bug reporting.
Meanwhile, Back At The Ranch...
Work continues on cleaning up the production systems in preparation for
the upgrade. I also performed live CD tests on both systems to look for
possible hardware incompatibilities. I haven't found any on the desktop
system (yet), and the laptop has some minor video issues when booting.
Work will continue.
Further Reading
10.04 Beta 1 Release Notes:
- http://www.ubuntu.com/testing/lucid/beta1
Some advice on CD/DVD burning:
- https://help.ubuntu.com/community/BurningIsoHowto
- https://help.ubuntu.com/community/CdDvd/Burning
- Chapter 16 of The Linux Command Line covers various kinds of storage
media.
Tips and techniques for software testers:
- https://wiki.ubuntu.com/Testing
- https://help.ubuntu.com/community/ReportingBugs
- https://wiki.ubuntu.com/DebuggingProcedures
Other installments in this series:
- Project: Getting Ready For Ubuntu 10.04
--
Posted By William Shotts to LinuxCommand.org: Tips, News And Rants at
3/23/2010 03:35:00 PM
From: William S. <bs...@pa...> - 2010-05-11 22:33:23
Despite my trepidations, I'm going to proceed with the upgrade to
Ubuntu 10.04. I've already upgraded my laptop and with Sunday's release
of an improved totem movie player, the one "show stopper" bug has been
addressed. I can live with/work around the rest. The laptop does not
contain much permanent data (I use it to write and collect images from
my cameras when I travel) so wiping the hard drive and installing a new
OS is not such a big deal. My desktop system is another matter. I store
a lot of stuff on it and have a lot of software installed, too. I've
completed my testing using one of my test computers verifying that all
of the important apps on the system can be set up and used in a
satisfactory manner, so in this installment we will look at preparing
the desktop system for installation of the new version of Ubuntu.
Creating A Package List
In order to get a grip on the extra software I have installed on my
desktop, I started out just writing a list of everything I saw in the
desktop menus that did not appear on my 10.04 test systems. This is all
the obvious stuff like Thunderbird, Gimp, Gthumb, etc., but what about
the stuff that's not on the menu? I know I have installed many command
line programs too. To get a complete list of the software installed on
the system, we'll have to employ some command line magic:
me@twin7$ dpkg --list | awk '$1 == "ii" {print $2}' > ~/package_list.old.txt
This creates a list of all of the installed packages on the system and
stores it in a file. We'll use this file to compare the package set
with that of the new OS installation.
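To see what the awk filter is actually doing, here is the same pipeline
run against a few fabricated lines shaped like dpkg --list output (the
package data is made up for illustration; real output has more columns):

```shell
# Fabricated sample in the shape of "dpkg --list" output:
# status flag, package name, version, description
sample='ii  bash       4.1-2ubuntu3  The GNU Bourne Again SHell
rc  oldpkg     1.0           removed package (config files remain)
ii  coreutils  7.4-2ubuntu3  The GNU core utilities'

# $1 == "ii" keeps only installed packages; print $2 emits just the name
echo "$sample" | awk '$1 == "ii" {print $2}'
```

Packages with any other status (such as "rc", removed but with
configuration files remaining) are filtered out, so the list contains
only software that is actually installed.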
Making A Backup
The most important task we need to accomplish before we install the new
OS is backing up the important data on the system for later restoration
after the upgrade. For me, the files I need to preserve are located
in /etc (the system's configuration files. I don't restore these, but
keep them for reference), /usr/local (locally installed software and
administration scripts), and /home (the files belonging to the users).
If you are running a web server on your system, you will also probably
need to backup portions of the /var directory as well.
There are many ways to perform backups. My systems normally backup
every night to a local file server on my network, but for this exercise
we'll use an external USB hard drive. We'll look at two popular
methods: rsync and tar.
The choice of method depends on your needs and on how your external
hard drive is formatted. The key feature afforded by both methods is
that they preserve the attributes (permissions, ownerships,
modification times, etc.) of the files being backed up. Another feature
they both offer is the ability to exclude files from the backup because
there are a few things that we don't want.
The rsync program copies files from one place to another. The source or
destination may be a network drive, but for our purposes we will use a
local (though external) volume. The great advantage of rsync is that
once an initial copy is performed, subsequent updates can be made very
rapidly as rsync only copies the changes made since the previous copy.
The disadvantage of rsync is that the destination volume has to have a
Unix-like file system since it relies on it to store the file
attributes.
Here we have a script that will perform the backup using rsync. It
assumes that we have an ext3 formatted file system on a volume named
BigDisk and that the volume has a backup directory:
#!/bin/bash
# usb_backup - Backup system to external disk drive using rsync
SOURCE="/etc /usr/local /home"
EXT3_DESTINATION=/media/BigDisk/backup
if [[ -d $EXT3_DESTINATION ]]; then
    sudo rsync -av \
        --delete \
        --exclude '/home/*/.gvfs' \
        $SOURCE $EXT3_DESTINATION
fi
The script first checks that the destination directory exists and then
performs rsync. The --delete option removes files on the destination
that do not exist on the source. This way a perfect mirror of the
source is maintained. We also exclude any .gvfs directories we
encounter. They cause problems. This script can be used as a routine
backup procedure. Once the initial backup is performed, later backups
will be very fast since rsync identifies and copies only files that
have changed between backups.
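As a low-risk way to get a feel for the mirroring behavior of --delete
before pointing rsync at real data, here is a self-contained sketch
using temporary directories (no sudo required; assumes rsync is
installed):

```shell
src=$(mktemp -d)
dst=$(mktemp -d)
echo hello > "$src/keep.txt"
echo stale > "$dst/stale.txt"    # exists only on the destination

# -a preserves attributes; --delete removes destination files that no
# longer exist on the source, keeping the destination a true mirror
rsync -a --delete "$src"/ "$dst"/

ls "$dst"    # keep.txt remains; stale.txt has been deleted
rm -rf "$src" "$dst"
```

Note the trailing slashes: "$src"/ tells rsync to copy the contents of
the directory rather than the directory itself.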
Our second approach uses the tar program. tar (short for tape archive)
is a traditional Unix tool used for backups. While its original use was
for writing files on magnetic tape, it can also write ordinary files.
tar works by recording all of the source files into a single archive
file called a tar file. Within the tar file all of the source file
attributes are recorded along with the file contents. Since tar does
not rely on the native file system of the backup device to store the
source file attributes, it can use any Linux-supported file system to
store the archive. This makes tar the logical choice if you are using
an off-the-shelf USB hard drive formatted as NTFS. However, tar has a
significant disadvantage compared to rsync. It is extremely cumbersome
to restore single files from an archive if the archive is large.
Since tar writes its archives as though it were writing to magnetic
tape, the archives are a sequential access medium. This means to find
something in the archive, tar must read through the entire archive
starting from the beginning to retrieve the information. This is
opposed to a direct access medium such as a hard disk where the system
can rapidly locate and retrieve a file directly. It's like the
difference between a DVD and a VHS tape. With a DVD you can immediately
jump to a scene whereas with a VHS tape you have to scan down the
entire length of the tape until you get to the desired spot.
Another disadvantage compared to rsync is that each time you perform a
backup, you have to copy every file again. This is not a problem for a
one time backup like the one we are performing here but would be very
time consuming if used as a routine procedure.
By the way, don't attempt a tar based backup on a VFAT (MS-DOS)
formatted drive. VFAT has a maximum file size limit of 4GB and unless
you have a very small set of home directories, you'll exceed the limit.
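Before pointing either script at a drive, it is easy to check what file
system the destination actually uses. A sketch using GNU stat (the
mount point shown is the hypothetical one from this article):

```shell
# Print the file system type of a directory. GNU stat: -f queries the
# file system itself rather than the file; %T prints its type name.
fs_type() {
    stat -f -c %T "$1"
}

# For an NTFS-3G mount this typically reports "fuseblk"; a VFAT stick
# reports "msdos", which is your cue to avoid files over 4GB.
# fs_type /media/BigDisk_NTFS
fs_type /    # e.g. "ext2/ext3" on the systems described here
```

If the answer is msdos or vfat, reformat the drive or switch to the
rsync method before committing to a large tar archive.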
Here is our tar backup script:
#!/bin/bash
# usb_backup_ntfs - Backup system to external disk drive using tar
SOURCE="/etc /usr/local /home"
NTFS_DESTINATION=/media/BigDisk_NTFS/backup
if [[ -d $NTFS_DESTINATION ]]; then
    for i in $SOURCE ; do
        fn=${i//\/}
        sudo tar -czv \
            --exclude '/home/*/.gvfs' \
            -f $NTFS_DESTINATION/$fn.tgz $i
    done
fi
This script assumes a destination volume named BigDisk_NTFS containing
a directory named backup. While we have implied that the volume is
formatted as NTFS, this script will work on any Linux compatible file
system that allows large files. The script creates one tar file for
each of the source directories. It constructs the destination file
names by removing the slashes from the source directory names and
appending the extension ".tgz" to the end. Our invocation of tar
includes the z option which applies gzip compression to the files
contained within the archive. This slows things down a little, but
saves some space on the backup device.
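The file name construction in the script relies on a bash parameter
expansion that is worth seeing in isolation:

```shell
i=/usr/local

# ${i//\/} performs a global pattern substitution: every "/" in $i is
# replaced with nothing, flattening the path into a plain file name
fn=${i//\/}

echo "$fn"        # usrlocal
echo "$fn.tgz"    # usrlocal.tgz, the archive name the script creates
```

The same expansion turns /etc into etc.tgz and /home into home.tgz, so
each source directory gets its own archive on the backup drive.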
Other Details To Check
Since one of the goals of our new installation is to utilize new
versions of our favorite apps starting with their native default
configurations, we won't be restoring many of the configuration files
from our existing system. This means that we need to manually record a
variety of configuration settings. This information is good to have
written down anyway. Record (or export to a file) the following:
- Email Configuration
- Bookmarks
- Address Books
- Passwords
- Names Of Firefox Extensions
- Others As Needed
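For the Firefox items, a low-tech insurance policy is to copy the whole
profile tree somewhere safe and mine it for bookmarks, passwords, and
extension names later. A sketch, assuming the default profile location
under ~/.mozilla (the destination directory name is made up; adjust to
taste):

```shell
# Copy the entire Firefox profile tree to a safety directory.
# cp -a preserves permissions, ownership, and timestamps.
safety_dir=~/settings-backup    # hypothetical destination
mkdir -p "$safety_dir"
cp -a ~/.mozilla "$safety_dir"/
```

This is in addition to, not instead of, the full backup; it just keeps
the settings you know you will want within easy reach.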
Ready, Set, Go!
That about does it. Once our backups are made and our settings are
recorded, the next thing to do is insert the install CD and reboot.
I'll see you on the other side!
Further Reading
The following chapters in The Linux Command Line
- Chapter 16 - Storage Media (covers formatting external drives)
- Chapter 19 - Archiving And Backup (covers rsync, tar, gzip)
Man pages:
- rsync
- tar
An article describing how to add NTFS support to Ubuntu 8.04
- http://maketecheasier.com/how-to-reformat-an-external-hard-drive-to-ntfs-format-in-ubuntu-hardy/2008/09/29
--
Posted By William Shotts to LinuxCommand.org: Tips, News And Rants at
5/11/2010 06:33:00 PM
From: William S. <bs...@pa...> - 2010-05-15 14:54:53
After some experiments and benchmarking, I have modified the
usb_backup_ntfs script presented in the last installment to remove
compression. This cuts the time needed to perform the backup using this
script by roughly half. The previous script works, but this one is
better:
#!/bin/bash
# usb_backup_ntfs - Backup system to external disk drive using tar
SOURCE="/etc /usr/local /home"
NTFS_DESTINATION=/media/BigDisk_NTFS/backup
if [[ -d $NTFS_DESTINATION ]]; then
    for i in $SOURCE ; do
        fn=${i//\/}
        sudo tar -cv \
            --exclude '/home/*/.gvfs' \
            -f $NTFS_DESTINATION/$fn.tar $i
    done
fi
--
Posted By William Shotts to LinuxCommand.org: Tips, News And Rants at
5/15/2010 10:54:00 AM
From: William S. <bs...@pa...> - 2010-05-18 19:47:42
For our final installment, we're going to install and perform some
basic configuration on our new Ubuntu 10.04 system.
Downloading The Install Image And Burning A Disk
We covered the process of getting the CD image and creating the install
media in installment 3; the process here is the same. You can download the CD
image here. Remember to verify the MD5SUM of the disk you burn. We
don't want to have a failed installation because of a bad disk. Also,
be sure to read the 10.04 release notes to avoid any last minute
surprises.
Last Minute Details
There may be a few files that we will want to transfer to the new
system immediately, such as the package_list.old.txt file we created in
installment 4 and each user's .bashrc file. Copy these files to a flash
drive (or use Ubuntu One, if you're feeling adventuresome).
Install!
We're finally ready for the big moment. Insert the install disk and
reboot. The install process is similar to previous Ubuntu releases.
Apply Updates
After the installation is finished and we have rebooted into our new
system, the first thing we should do is apply all the available
updates. When I installed last week, there were already 65 updates.
Assuming that we have a working Internet connection, we can apply the
updates with the following command:
me@linuxbox ~$ sudo apt-get update; sudo apt-get upgrade
Since the updates include a kernel update, reboot the system after the
updates are applied.
Install Additional Packages
The next step is to install any additional software we want on the
system. To help with this task, we created a list in installment 4 that
contained the names of all of the packages on the old system. We can
compare this list with the new system using the following script:
#!/bin/bash
# compare_packages - compare lists of packages
OLD_PACKAGES=~/package_list.old.txt
NEW_PACKAGES=~/package_list.new.txt
if [[ -r $OLD_PACKAGES ]]; then
    dpkg --list | awk '$1 == "ii" {print $2}' > $NEW_PACKAGES
    diff -y $OLD_PACKAGES $NEW_PACKAGES | awk '$2 == "<" {print $1}'
else
    echo "compare_packages: $OLD_PACKAGES not found." >&2
    exit 1
fi
This script produces a list of packages that were present on the old
system but not yet on the new system. You will probably want to capture
the output of this script and store it in a file:
me@linuxbox ~ $ compare_packages > missing_packages.txt
You should review the output and apply some editorial judgement as it
is likely the list will contain many packages that are no longer used
on the new system in addition to the packages that you do want to
install. As you review the list, you can use the following command to
get a description of a package:
apt-cache show package_name
Once you determine the final list of packages to be installed, you can
install each package using the command:
sudo apt-get install package_name
or, if you are feeling especially brave, you can create a text file
containing the list of desired packages to install and do them all at
once:
me@linuxbox ~ $ sudo xargs apt-get install < package_list.txt
Create User Accounts
If your old system had multiple user accounts, you will want to
recreate them before restoring the user home directories. You can
create accounts with this command:
sudo adduser user
This command will create the user and group accounts for the specified
user and create the user's home directory.
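If the old system had more than a couple of accounts, you can drive
adduser from a simple list. A sketch (the user names are made up, and
echo is left in so you can preview the commands before running them for
real):

```shell
# One account per line; replace with the user names from your old system
users="alice
bob
carol"

for u in $users; do
    # Preview only; remove the echo to actually create the accounts
    echo sudo adduser "$u"
done
```

On Ubuntu, adduser will prompt interactively for each user's password
and details, so expect to answer a few questions per account.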
Restore The Backup
If you created your backup using the usb_backup script from installment
4 you can use this script to restore the /usr/local and /home
directories:
#!/bin/bash
# usb_restore - restore directories from backup drive with rsync
BACKUP_DIR=/media/BigDisk/backup
ADDL_DIRS=".ssh"
sudo rsync -a $BACKUP_DIR/usr/local /usr
for h in /home/* ; do
    user=${h##*/}
    for d in $BACKUP_DIR$h/*; do
        if [[ -d $d ]]; then
            if [[ $d != $BACKUP_DIR$h/Examples ]]; then
                echo "Restoring $d to $h"
                sudo rsync -a "$d" $h
            fi
        fi
    done
    for d in $ADDL_DIRS; do
        d=$BACKUP_DIR$h/$d
        if [[ -d $d ]]; then
            echo "Restoring $d to $h"
            sudo rsync -a "$d" $h
        fi
    done
    # Uncomment the following line if you need to correct file ownerships
    #sudo chown -R $user:$user $h
done
You should adjust the value of the ADDL_DIRS constant to include hidden
directories you want to restore, if any, as this script does not
restore any directory whose name begins with a period to prevent
restoration of configuration files and directories.
Another issue you will probably encounter is the ownership of user
files. Unless the user ids of the users on the old system match the
user ids of the users on the new system, rsync will restore the files
with the user ids of the old system. To overcome this, uncomment the
chown line near the end of the script.
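Whether the uids actually differ is easy to check in advance by
comparing the new /etc/passwd against the copy preserved in the /etc
backup. A sketch using awk and diff (the backup path is the one used by
the usb_backup script):

```shell
old=/media/BigDisk/backup/etc/passwd    # preserved by the usb_backup script
new=/etc/passwd

# Print name:uid pairs from each file and diff them; any output means
# at least one account has a different uid on the new system
diff <(awk -F: '{print $1 ":" $3}' "$old" | sort) \
     <(awk -F: '{print $1 ":" $3}' "$new" | sort)
```

No output means the uids line up and the chown step can stay commented
out.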
If you made your backup using the usb_backup_ntfs script, use this
script to restore the /usr/local and /home directories:
#!/bin/bash
# usb_restore_ntfs - restore directories from backup drive with tar
BACKUP_DIR=/media/BigDisk_NTFS/backup
cd /
sudo tar -xvf $BACKUP_DIR/usrlocal.tar
for h in /home/* ; do
    user=${h##*/}
    sudo tar -xv \
        --seek \
        --wildcards \
        --exclude="home/$user/Examples" \
        -f $BACKUP_DIR/home.tar \
        "home/$user/[[:alnum:]]*" \
        "home/$user/.ssh"
done
To append additional directories to the list to be restored, add more
lines to the tar command using the "home/$user/.ssh" line as a
template. Since tar restores user files using user names rather than
user ids as rsync does, the ownership of the restored files is not a
problem.
Enjoy!
Once the home directories are restored, each user should reconfigure
their desktop and applications to their personal taste. Other than
that, the system should be pretty much ready-to-go. Both of the backup
methods provide the /etc directory from the old system for reference in
case it's needed.
Further Reading
Man pages for the following commands:
- apt-cache
- apt-get
- adduser
- xargs
--
Posted By William Shotts to LinuxCommand.org: Tips, News And Rants at
5/18/2010 03:47:00 PM