Sybase ASE: Dump load failed

We had the following issue: after a problem on a customer database, we needed to restore a single table. A compressed dump was available containing the latest version of this one table but an older version of the other tables. So the plan was to load the dump on a test system with the same device layout, ASE version and operating system, export the table and import it on the live system.
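The usual tool for moving a single table between servers is bcp; the export/import step would have looked something like this (table and server names are just placeholders, not from the actual case):

bcp mydb..mytable out mytable.bcp -c -Usa -SSYBASE_TEST
bcp mydb..mytable in mytable.bcp -c -Usa -SSYBASE_PROD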

Unfortunately while loading the dump, we got the following error:

Oct  7 10:57:00 2013: Backup Server: 2.23.1.1: Connection from Server SYBASE on Host myhost with HostProcid 7705.
Oct  7 10:57:00 2013: Backup Server: 4.132.1.1: Attempting to open byte stream device: 'compress::9::/dump1_1/mydb.201310030015.000::00'
Oct  7 10:57:00 2013: Backup Server: 6.28.1.1: Dumpfile name 'mydb13276003C2   ' section number 1 mounted on byte stream 'compress::9::/dump1_1/mydb.201310030015.000::00'
Oct  7 10:57:23 2013: Backup Server: 4.188.1.1: Database mydb: 2310872 kilobytes (1%) LOADED.
Oct  7 10:57:32 2013: Backup Server: 4.124.2.1: Archive API error for device='compress::9::/dump1_1/mydb.201310030015.000::00': Vendor application name=Compress API, Library version=1, API routine=syb_read(), Message=syb_read: gzread() error=-1 msg=1075401822
Oct  7 10:57:32 2013: Backup Server: 6.32.2.3: compress::9::/dump1_1/mydb.201310030015.000::00: volume not valid or not requested (server: n byte stream 'cu@ess::9::/dump1_1/mydb.20¤D, session id: 17.)
Oct  7 10:57:32 2013: Backup Server: 1.14.2.4: Unrecoverable I/O or volume error.  This DUMP or LOAD session must exit.

So it looks like there was a problem uncompressing the dump. I am not sure where the strange characters in the second-to-last line come from, but I am also not sure they are related to the problem.

Reading the header from the dump as described in a previous post worked fine. So the dump was not completely corrupt. It’s also the reason why the first percent of the dump could be loaded.
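For reference, reading the header is just a load with the headeronly option; the exact command isn't shown here, but it would have been along these lines (same stripe as in the log above):

1> load database mydb from "compress::9::/dump1_1/mydb.201310030015.000" with headeronly
2> go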

We also tried loading the dump using the “with listonly” option but it failed:

1> load database mydb from "compress::9::/dump1_1/mydb.201310030015.000" with listonly=full
2> go
Backup Server session id is:  26.  Use this value when executing the 'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 4.22.1.1: Option LISTONLY is not valid for device 'compress::9::/dump1_1/mydb.201310030015.000::00'.

I never found out why it wasn’t possible to use listonly on this dump file, but I didn’t really have time to look into it in detail…

Then I saw that there was a “with verify only” option. Here is the syntax from the Sybase documentation:

load database database_name
from [compression=]stripe_device
[at backup_server_name ]
[density = density_value,
blocksize = number_bytes,
dumpvolume = volume_name,
file = file_name]
with verify only [ = header | full]

But it failed saying there was an error near “only”… Then I wondered why the syntax would be “with headeronly” and “with listonly” but “with verify only”, i.e. with an extra space. So we tried without the space and it worked. Well, kind of… It could still read the header but failed with the same error message while reading the rest.
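So the statement that was finally accepted looked something like this (a sketch, not a quote of the exact session):

1> load database mydb from "compress::9::/dump1_1/mydb.201310030015.000" with verifyonly=full
2> go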

Next I thought it might have been a problem while transferring the dump through FTP (I wasn’t sure whether the transfer was done in binary or ASCII mode). One way to check is to search for \r\n sequences in the dump. This can be done using the od command, which dumps files in octal and other formats. The -c option shows special characters as escape sequences (i.e. \r and \n). So you run od and pipe it to grep, using -F so that the backslashes in the pattern are matched literally, e.g.:

od -c mydb.201310060030.000 | grep -F '\r  \n'

If this returns something, then you probably used the wrong transfer mode in FTP. That wasn’t the case here. Also, the file on-site and the one we received had the same number of bytes, which also suggested that this was not the issue.
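A more reliable check than comparing file sizes is to compare checksums on both sides; a minimal sketch, assuming md5sum is available on both hosts:

md5sum mydb.201310060030.000

Run it on the source host and on the target host and compare the two sums.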

Another thing you need to check is whether uncompressing failed because of memory or disk space issues. In our case we had plenty of free disk space and RAM available.
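Assuming a Linux host (the CR quoted below mentions Red Hat), a quick check looks like this:

df -h /dump1_1    # free space on the file system holding the dump stripe
free -m           # available memory in MB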

While googling for a solution, I also found the following in a newsgroup:

Backup Server will use asynchronous I/O by default and there was a CR 335852 to work around this behavior. Try starting backupserver using trace flag -D32 .

CR Description :-
6.21 Dumping or loading databases with asynchronous I/O
[CR #335852] On an IA32 running Red Hat, running a dump or load database command can cause Backup Server to stop responding when using asynchronous I/O. Backup Server uses asynchronous I/O by default.

Workaround: Start Backup Server using trace flag -D32 to force synchronous I/O.

So we tried adding the flag to the start script of the Backup Server, but it didn’t help. In any case, we didn’t know whether the problem occurred during loading or whether something had already gone wrong while dumping.
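For illustration, the flag goes at the end of the backupserver command line in the Backup Server’s RUN file; the paths and server name below are made up:

$SYBASE/$SYBASE_ASE/bin/backupserver \
-SSYB_BACKUP \
-e$SYBASE/$SYBASE_ASE/install/SYB_BACKUP.log \
-M$SYBASE/$SYBASE_ASE/bin/sybmultbuf \
-D32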

The next thing that came to my mind was to try to uncompress the dump file manually to see whether it was corrupt. This can be done with gunzip. You just need to rename the file first in case it doesn’t have a valid gzip extension, e.g.:

mv mydb.201310060030.000 mydb.201310060030.000.gz
gunzip mydb.201310060030.000.gz

In our case it failed. We repeated it on a dump file we knew was fine and it worked, so we had found the source of the problem: the dump stripe was corrupt. Repeating the same test on the dump on-site worked, so the stripe was not corrupt after the dump but was somehow corrupted during the transfer. All we had to do was transfer it again.
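As a side note, gzip can also test the integrity of a stripe without writing the uncompressed file to disk, which is handy for large dumps (a sketch, assuming the .gz rename shown above):

gzip -t mydb.201310060030.000.gz && echo "stripe looks OK" || echo "stripe is corrupt"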

I’m not too sure why the stripe got corrupted during the transfer, but I was happy it hadn’t been corrupted while dumping, as we had feared in the beginning.
