Discussion:
Attempting to mount disk images
Tim
2012-01-17 17:12:02 UTC
Hello,

I have disk images from a Solaris x86 system (I believe version 10)
which I would like to mount under Linux. The images were obtained
using dd. I currently have version 0.7.0 of zfs-fuse installed on a
Debian virtual machine (via the Sid package). My virtual machine has
been provided access to the dd images through VMware; therefore, the
disk images appear as /dev/sd* block devices.

I am unable to get the devices recognized. I have tried:
zpool import
zpool import -d /dev
zpool import -f -d /dev/block
zpool import -f -d /dev/disk/by-path

I have tried other variations as well, but nothing is recognized. Is
there an easy way to verify that these are indeed ZFS images? Assuming
they are, is it possible that an unsupported pool version is in use?
How would I check for this?
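
One thing I'm considering, assuming the Debian zfs-fuse package also
ships a zdb binary (I haven't confirmed that): dump the vdev labels
straight off each device and look for a version field, e.g.

# zdb -l /dev/sdb

If the device really starts with a ZFS vdev, this should print up to
four labels with fields like version, name and pool_guid; if it prints
nothing, either the labels sit at some other offset or the device
isn't a bare vdev.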

Thanks much,
tim
Emmanuel Anne
2012-01-17 17:15:45 UTC
If the images come from a whole disk, it might be tricky.
If it's a partition image, then these days even parted can tell you
whether the partition is ZFS or not.
--
my zfs-fuse git repository :
http://rainemu.swishparty.co.uk/cgi-bin/gitweb.cgi?p=zfs;a=summary
Tim
2012-01-17 17:25:16 UTC
Thanks for the quick reply.
Post by Emmanuel Anne
If the images come from a whole disk, it might be tricky.
Understood. I can certainly set up loopback devices with the
appropriate offsets if necessary.
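
For instance, something along these lines, where the 1 MiB offset
(2048 sectors) is purely a made-up placeholder until I know the real
layout:

# losetup -o $((2048 * 512)) -f --show /dev/sdc
# zpool import -d /dev

(losetup takes a byte offset, hence the multiplication by the 512-byte
sector size; -f --show picks a free loop device and prints its name.)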
Post by Emmanuel Anne
If it's a partition image, then these days even parted can tell you
whether the partition is ZFS or not.
Ok, this is what I just tried:

# parted -l

Error: /dev/sdb: unrecognised disk label

Error: /dev/sdc: unrecognised disk label

Error: /dev/sdd: unrecognised disk label

Error: /dev/sde: unrecognised disk label

Error: /dev/sdf: unrecognised disk label

Error: /dev/sdg: unrecognised disk label

# parted -v
parted (GNU parted) 2.3
...

Some of those do look like UFS, but the rest are supposedly ZFS.
Using sleuthkit's mmls tool on /dev/sdc, I get:

# mmls /dev/sdc
Sun Volume Table of Contents (Solaris)
Offset Sector: 0
Units are in 512-byte sectors

Slot Start End Length Description
00: 00 0000000000 0031374944 0031374945 / (0x02)
01: ----- 0031374945 0031407074 0000032130 Unallocated


Not really sure what to make of it. I believe these systems may have
been running under LDOMs...

tim
Emmanuel Anne
2012-01-17 21:09:34 UTC
You need a Solaris specialist here; I am not one!
Seth Heeren
2012-01-17 22:09:38 UTC
Hi Tim,

I'm also no Solaris expert, although I just migrated my own
Solaris-based v23 pools to zfs-on-linux; I think I managed because I
took care to create simple 'whole-disk-sized' MSDOS partitions on the
raw devices before entering those partitions into their zfs pools (so
that's ... cheating a bit).

Since, however, you appear to have enough spare disks to hold a dd
image, this is my advice: create a new pool using MSDOS primary
partitions (presumably from Linux, using either zfs-fuse or ZoL) and
then

zfs send -R ***@now | mbuffer -m180M | zfs receive -nvFud newpool

to copy it over smartly. The good thing about that is that it will
even allow you to change the pool layout and various other options.
_Do_ note, though:

- that send -R replicates common fs properties (including
compression/casesensitivity/sharenfs/mountpoint etc. which is why I
included -u (unmounted) on zfs receive)
- that you need to choose the pool version of the destination pool
wisely (if it doesn't support the fs versions being received, it will
receive the raw dataset blocks just fine, but they won't be mountable;
if you make the pool version too high you can't import the pool in
older Solaris/BSD versions)
- size the mbuffer to suit your system memory; 25% of available RAM,
up to 1G, has worked well for me to dampen any network latency in
combination with disk bandwidth
- if network bandwidth is crucial, use -D on send and perhaps add
compression in the pipe (rough sketch below). Using -D will dedup the
_stream_, but has no effect on the dedup state of the source/target pools
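
A rough sketch of what I mean for the network case, with made-up pool
and host names (and gzip purely as an example compressor):

zfs send -R -D sourcepool@now | gzip -1 | \
    ssh otherhost 'gunzip | zfs receive -nvFud newpool'

mbuffer can still sit on either side of the ssh pipe if you want the
buffering as well.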

HTH
Tim
2012-01-17 23:38:26 UTC
Hey Seth,

Thanks for the suggestions, I'll keep them in mind.
Unfortunately I'm not in a position to be able to reimage the disks or
to use zfs/zpool tools from the original host. I basically have to
deal with the images I was given, as getting new images is a
logistical problem.

I'm going to try to search for some partition/filesystem headers.
Failing a simple loopback offset fix, I'll probably have to set up
OpenSolaris and hope I can import there.

thanks,
tim
sgheeren
2012-01-17 23:42:44 UTC
Post by Tim
I'm going to try to search for some partition/filesystem headers.
Failing a simple loopback offset fix, I'll probably have to set up
opensolaris and hope I can import there.
Ok, good luck (I saw your crosspost from the ZoL list too, thx)

Seth
Darik Horn
2012-01-17 22:48:22 UTC
Post by Tim
# parted -l
Error: /dev/sdb: unrecognised disk label
Solaris creates poorly formed GPT labels that are not recognized by most
Linux utilities, which can cause this kind of error when a whole-disk pool
is moved from Solaris to zfs-fuse or zfs-linux. Technical details are here:

https://github.com/zfsonlinux/zfs/issues/344

You can manually rewrite the GPT label according to the ticket, or you can
take each vdev offline, clear it, and do an in-place replace.
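
Roughly, for the second option, with hypothetical pool and device
names (and note this only applies once the pool actually imports,
which may not help in your situation):

# zpool offline tank sdb
# sgdisk --zap-all /dev/sdb
# zpool replace tank sdb

zpool replace with a single device argument re-adds the disk in place
and resilvers it, after which the label should be written out cleanly.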
Tim
2012-01-17 23:34:02 UTC
Hi Darik,
Post by Darik Horn
Solaris creates poorly formed GPT labels that are not recognized by most
Linux utilities, which can cause this kind of error when a whole-disk pool
https://github.com/zfsonlinux/zfs/issues/344
You can manually rewrite the GPT label according to the ticket, or you can
take each vdev offline, clear it, and do an in-place replace.
Thanks for the helpful info. Unfortunately, I just tried running
gdisk and it doesn't report the kind of malformed GPT described there:

# gdisk /dev/sdc
GPT fdisk (gdisk) version 0.8.1

Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present


It is certainly possible that these disk images were obtained in an
unusual way. I'm honestly not sure if they were obtained from
partitions, volumes, or perhaps even at some layer below a
RAID/mirror. I do know that file contents that I want are in the
images though, based on a little browsing with a hex editor.

Unfortunately, taking another image is not really an option right
now, so I need to figure out how to get these mounted. I can
certainly modify a copy of the disks as needed to get them
recognized, if I can just figure out what needs to be done; I only
need read access to the data.

I'm going to research GPTs and see if I can search for a signature of
these blocks that may occur in an unexpected location in the image.
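
Concretely, I'm thinking of scanning the raw devices for the GPT
signature and for strings that appear in ZFS vdev labels, e.g.

# grep -aob 'EFI PART' /dev/sdc | head
# grep -aob 'pool_guid' /dev/sdc | head

(-a treats the binary device as text, -o prints only the match, and -b
prefixes each hit with its byte offset, which would give me candidate
offsets to try with losetup.)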

Thanks again,
tim
Fajar A. Nugraha
2012-01-18 00:29:18 UTC
Post by Tim
It is certainly possible that these disk images were obtained in an
unusual way.
How?
Post by Tim
 I'm honestly not sure if they were obtained from
partitions, volumes, or perhaps even at some layer below a
RAID/mirror.  I do know that file contents that I want are in the
images though, based on a little browsing with a hex editor.
So you didn't dump them yourself?
Post by Tim
Unfortunately, performing another image is not really an option right
now, so I need to figure out how to get them mounted.  I definitely
can modify a copy of the disks as needed to get them recognized, if I
can just figure out what needs to be done.  I only need read access to
the data.
If you dump either:
- the whole disk
- the partition containing the Solaris slices
- the slice containing ZFS

you should be able to get it recognized by Linux. If you need a "fake"
MBR (plus an fdisk partition table), VirtualBox should be able to help
you.
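
Something like this, roughly, using VirtualBox's raw-VMDK support; the
file and device names are placeholders, and you should check the
VBoxManage documentation for the exact option spelling on your version:

VBoxManage internalcommands createrawvmdk \
    -filename wrapper.vmdk -rawdisk /dev/sdc

The resulting wrapper.vmdk can be attached to a guest like any other
disk; createrawvmdk also takes -partitions (and an -mbr file) if you
only want to expose selected partitions behind a synthetic MBR.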

If the disk was previously managed by a hardware RAID card (even if
it's just encapsulated as a single-drive RAID0), then it won't be
easy. You need to find out how that card labels the disk / reserves
some space, and adjust accordingly.

In any case you need to know how you dumped the disk.
--
Fajar
Darik Horn
2012-01-18 01:14:02 UTC
Post by Tim
It is certainly possible that these disk images were obtained in an
unusual way. I'm honestly not sure if they were obtained from
partitions, volumes, or perhaps even at some layer below a
RAID/mirror. I do know that file contents that I want are in the
images though, based on a little browsing with a hex editor.
Do this:

# file MyDiskImage1

If it says something like:

MyDiskImage1:
x86 boot sector; partition 1: ID=0xee, starthead 255, startsector 1,
3907029167 sectors, extended partition table (last)\011, code offset 0x0

Then you can import the image as a raw disk in something like VMware or
VirtualBox, but you must do it at the command line; check the manual of
your VM product for instructions. The image must be at least as large
as 512 bytes times the reported sector count; this example (3907029167
sectors x 512 bytes, roughly 2.0 TB) is a 2TB disk.

The `file` command is more likely to say:

MyDiskImage1: data

Now do this:

# dd if=MyDiskImage1 bs=128k count=1 | strings

If you see something like this that looks like a zpool configuration field:

version
name
tank
state
pool_guid
hostname
toaster
top_guid
guid
vdev_children
vdev_tree
type
raidz
guid
nparity
metaslab_array
metaslab_shift
ashift
asize
is_log
children
(... plus disk information)

then you need to `dd` the raw image of the ZFS vdev into a partition of
a virtual disk that has a partition table, something like:

# dd if=MyDiskImage1.raw of=/dev/sdX1 bs=1M

(where /dev/sdX1 is the first partition of the new virtual disk).
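
For example, one way to prepare such a target entirely from Linux
(sizes and names are placeholders, and losetup --partscan needs a
reasonably recent util-linux; kpartx is an alternative):

# truncate -s 20G target.img
# parted -s target.img mklabel msdos mkpart primary 1MiB 100%
# losetup --partscan --find --show target.img
# dd if=MyDiskImage1.raw of=/dev/loop0p1 bs=1M

Then point `zpool import -d /dev` at the resulting loop partition.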

If you do not get a `strings` output that looks like the given example,
then you cannot reimport. Unpacking and cooking an image from a hardware
RAID implementation is intensive recovery work.