Sybase: Reset the identity counter on a table

When working with an identity column, a counter per table is maintained. This counter is used to generate new values for the identity column of the table.
When you load a lot of data into a table as a test and then truncate the table, the identity counter is not reset. So if you inserted 1000000 rows, truncated the table and then inserted a single entry, you'd get the value 1000001 for the new entry.

If you want to reset the identity counter for a table so that the next value of the identity column is 1 again, you can change the identity_burn_max attribute for the table, e.g.:

sp_chgattribute my_table, 'identity_burn_max', 0, '0'

Please note that this command acquires an exclusive table lock on the table. The lock is not kept for long, but it means that the command will be blocked by any existing read or write lock on the table.

The last parameter is the identity counter you want to set. So if you have kept the entries with the identity values 1 to 10 and deleted the rest, you'd have to set the identity_burn_max attribute to 10:

sp_chgattribute my_table, 'identity_burn_max', 0, '10'
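
To find out which value is safe to use, you can first check the highest identity value still in use (a quick check, my_table and my_identity_column being placeholders):

select max(my_identity_column) from my_table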

If you try to set it to a lower value (i.e. a value lower than the maximum identity value already in use in the table), sp_chgattribute will fail and refuse to update the attribute, because you would then risk generating duplicate identity values.

You can work around it by directly setting the attribute using dbcc:

dbcc set_identity_burn_max(my_database, my_table, 'my_new_value')

Also note that if what you want is to actually create an identity gap, all you have to do to increase the counter is to allow identity inserts on the table, insert a higher value and delete it:

SET IDENTITY_INSERT my_table ON
go
INSERT INTO my_table(my_identity_column,...) VALUES (1785,...)
go
SET IDENTITY_INSERT my_table OFF
go
DELETE FROM my_table WHERE my_identity_column=1785
go

Sybase: transaction log dump and non-logged operations

Most of you probably already know this but just to make sure… There are three kinds of operations from a transaction log perspective:

  1. logged operations: those are operations which are completely logged in the transaction log.
  2. minimally logged or non-logged operations which are logged in the transaction log, but for which not every change performed by the operation is logged. They do not prevent a dump and restore of the transaction log.
  3. minimally logged or non-logged operations which are not logged in the transaction log at all.

Here are a few examples of each type of operation:

  1. insert into, update, delete
  2. truncate table
  3. select into, fast bcp, parallel sort

When an operation of the third category is performed, the corresponding transaction log entries are missing, so a dump and restore of the transaction log alone is not possible anymore. This means ASE is not able to recover the database in a disaster recovery scenario unless you take a full dump of the database. Since the dumped transaction log would not contain the required information, ASE prevents you from dumping the transaction log once such an operation has been performed, because you couldn't use the dumped log to recover the database anyway.

Many people tend to think that truncate table also prevents a transaction log dump, which is not true. Truncate table does not log every row deletion and is thus not a fully logged operation, but it does log all page deallocations in the transaction log, so it is still possible to reconstruct the database. So if you rely on transaction log dumps to recover the database or if you use them to migrate data from one system to another, it is important to:

  • prevent such operations from happening
  • check whether such operations have been performed

prevent such operations from happening

This can be done by setting an option on the database:

master..sp_dboption mydb, "select into/bulkcopy/pllsort", false
go
use mydb
go
checkpoint
go

Replace mydb with the name of the database you want to prevent such operations on. With this option set, select into, fast bcp and parallel sort operations will not be allowed anymore on this database.
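
To verify that the option is now set, you can check the output of sp_helpdb (mydb again being a placeholder):

sp_helpdb mydb

Once the option is disabled, “select into/bulkcopy/pllsort” should no longer be listed in the status column of the output.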

check whether such operations have been performed

If such operations are not prevented as shown above, you can use the following query to check whether one has been performed:

select tran_dumpable_status('mydb')

If it returns 0, then everything is fine and a dump of the transaction log should work fine. Otherwise, such an operation was performed and it is not possible to dump the transaction log until a full dump of the database is performed.

If the returned value is not 0, you can find out exactly what happened by interpreting the returned bit mask:

  • 1 – the specified database name was not found.
  • 2 – there is no separate log device for this database.
  • 4 – the log first page is on a data-only disk fragment.
  • 8 – the trunc log on chkpt option is set for the database.
  • 16 – an operation preventing a dump of the transaction log was performed.
  • 32 – a transaction log dump with truncate_only was performed.
  • 64 – it’s a new database and a full database dump must be performed before you can dump the transaction log.

Here are a few explanations on the different values:

  • 1 – this mostly means that you’ve misspelled the database name.
  • 2 or 4 – the transaction log needs to be entirely on a separate transaction log device for the dump of the transaction log to work.
  • 8 – the database is configured to truncate the transaction log on checkpoint. This means that the transaction log is mostly empty, except for the changes which happened since the last checkpoint. A dump of the transaction log then makes no sense, since most of the changes have already been discarded. See this post for more information.
  • 16 – a select into, fast bcp or parallel sort has been performed and the transaction log would be useless anyway.
  • 32 – the transaction log has been truncated by this command and the chain has been broken.
  • 64 – the transaction log contains the changes applied since the last full dump so you need at least one full dump in order to use it.
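
Since the returned value is a bit mask, you can test for a single condition by AND-ing the result with the corresponding bit, e.g. for the non-logged operation bit (a small sketch, mydb being a placeholder):

select tran_dumpable_status('mydb') & 16

A non-zero result means that such an operation has been performed since the last full database dump.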

If you're still working with a pre-15 version of ASE (e.g. 12.5), the tran_dumpable_status function will not be available, as it was introduced in ASE 15. Instead, you'll have to read from the dbinfo structure. This can be done using:

dbcc traceon (3604)
go
dbcc dbinfo (my_db_name)
go

Replace my_db_name with the name of the database you want to check, then look for offset 128=dbi_ldstate in the output. If the value is 4, 8 or 12, it means you cannot dump the transaction log for this database. Here's what the different values mean:

  • 4 (DBI_NONLOGGED_WRITES) means that a non-logged write was performed
  • 8 (DBI_TRUNCATED_ONLY) means that the log was dumped with truncate_only
  • 12 means both

In all three cases, a normal dump of the transaction log will not be possible.

The dbinfo structure is also accessible through the keys1 column of sysindexes for syslogs (id = 8). So you can alternatively use the following statement:

SELECT CONVERT(BINARY,SUBSTRING(keys1,129,1)) & 12 FROM my_db_name..sysindexes WHERE id = 8

Replace my_db_name with the name of the database you want to check. This will also return 0, 4, 8 or 12, and a normal dump of the transaction log is only possible if it returns 0.

So if you do not need any of the operations which prevent the transaction log from being dumped, the best solution is to prevent them at the database level. Otherwise, when you need to perform a transaction log dump (e.g. because the transaction log is full or because you need to migrate the changes to a replicated database), you should first check whether a transaction log dump is possible and trigger a full dump instead if it is not.
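
Here is a rough sketch of that logic (mydb and the dump file names are placeholders; in practice you would add error handling or script this outside ASE):

declare @status int
select @status = tran_dumpable_status('mydb')
if @status = 0
    dump transaction mydb to '/dumps/mydb.tran'
else
    dump database mydb to '/dumps/mydb.full'
go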

Sybase ASE: Dump load failed

We’ve had the following issue: After a problem on a customer database, we needed to restore a single table. A compressed dump was available containing the latest version of this one table but an older version of the other tables. So the plan was to load the dump on a test system with the same device layout, ASE version and operating system, export the table and import it on the live system.

Unfortunately while loading the dump, we got the following error:

Oct  7 10:57:00 2013: Backup Server: 2.23.1.1: Connection from Server SYBASE on Host myhost with HostProcid 7705.
Oct  7 10:57:00 2013: Backup Server: 4.132.1.1: Attempting to open byte stream device: 'compress::9::/dump1_1/mydb.201310030015.000::00'
Oct  7 10:57:00 2013: Backup Server: 6.28.1.1: Dumpfile name 'mydb13276003C2   ' section number 1 mounted on byte stream 'compress::9::/dump1_1/mydb.201310030015.000::00'
Oct  7 10:57:23 2013: Backup Server: 4.188.1.1: Database mydb: 2310872 kilobytes (1%) LOADED.
Oct  7 10:57:32 2013: Backup Server: 4.124.2.1: Archive API error for device='compress::9::/dump1_1/mydb.201310030015.000::00': Vendor application name=Compress API, Library version=1, API routine=syb_read(), Message=syb_read: gzread() error=-1 msg=1075401822
Oct  7 10:57:32 2013: Backup Server: 6.32.2.3: compress::9::/dump1_1/mydb.201310030015.000::00: volume not valid or not requested (server: n byte stream 'cu@ess::9::/dump1_1/mydb.20¤D, session id: 17.)
Oct  7 10:57:32 2013: Backup Server: 1.14.2.4: Unrecoverable I/O or volume error.  This DUMP or LOAD session must exit.

So it looks like there was a problem uncompressing the dump. I am not sure where the strange characters in the second to last line come from, nor whether they are related to the problem.

Reading the header from the dump as described in a previous post worked fine. So the dump was not completely corrupt. It’s also the reason why the first percent of the dump could be loaded.

We also tried loading the dump using the “with listonly” option but it failed:

1> load database mydb from "compress::9::/dump1_1/mydb.201310030015.000" with listonly=full
2> go
Backup Server session id is:  26.  Use this value when executing the 'sp_volchanged' system stored procedure after fulfilling any volume change
request from the Backup Server.
Backup Server: 4.22.1.1: Option LISTONLY is not valid for device 'compress::9::/dump1_1/mydb.201310030015.000::00'.

I never found out why it wasn't possible to use listonly on this dump file, but I didn't really have time to look into it in detail…

Then I saw that there was a “with verify only” option. Here is the syntax from the Sybase documentation:

load database database_name
from [compression=]stripe_device
[at backup_server_name ]
[density = density_value,
blocksize = number_bytes,
dumpvolume = volume_name,
file = file_name]
with verify only [ = header | full]

But it failed saying there was an error near “only”… Then I wondered why the syntax would be “with headeronly” and “with listonly” but “with verify only” i.e. with an extra space. So we tried without the space and it worked. Well, kind of… It could still load the header but failed with the same error message while reading the rest.

Next I thought it might have been a problem while transferring the dump through FTP (I wasn't sure whether the transfer was done in binary or ASCII mode). One way to check is to search for \r\n characters in the dump. This can be done using the od command, which dumps files in octal and other formats. You can use the -c option to show special characters as escape sequences (i.e. \r and \n). So you need to run od and pipe its output to grep, e.g.:

od -c mydb.201310060030.000 | grep "\\r  \\n"

If this returns something, then you probably used the wrong transfer mode in FTP. It wasn't the case here. Also, both the file on site and the one we got had the same number of bytes, which also indicates that this was not the issue.

Another thing you need to check is whether uncompressing failed because of memory or disk space issues. In our case we had plenty of free disk space and RAM available.

Another thing I found while googling for a solution was the following in a newsgroup:

Backup Server will use asynchronous I/O by default and there was a CR 335852 to work around this behavior. Try starting backupserver using trace flag -D32 .

CR Description :-
6.21 Dumping or loading databases with asynchronous I/O
[CR #335852] On an IA32 running Red Hat, running a dump or load database command can cause Backup Server to stop responding when using asynchronous I/O. Backup Server uses asynchronous I/O by default.

[ Workaround : Start Backup Server using trace flag -D32 to force a synchronous I/O.

So we tried adding the flag to the start script of the backup server. But it didn’t help. Anyway we didn’t know whether the problem was during loading or whether there had been a problem while dumping.

The next thing which came to my mind was to try to uncompress the dump file manually to see whether it was corrupt. This can be done with gunzip. You just need to rename the file first in case it doesn't have a valid gzip extension, e.g.:

mv mydb.201310060030.000 mydb.201310060030.000.gz
gunzip mydb.201310060030.000.gz

In our case it failed. So we repeated it on a dump file we knew was fine and it worked. We had found the source of the problem: the dump stripe was corrupt.
Repeating it on the dump on site worked. So the stripe was not corrupt after the dump but was somehow corrupted during the transfer. All we had to do was to transfer it again.

I’m not too sure why the stripe got corrupted during the transfer but was happy it didn’t get corrupted while dumping as we had feared in the beginning.

PHP: Cannot connect to MySQL on localhost

I decided some time ago to move on my Mac from MAMP to a native installation of Apache, MySQL and PHP. After installing everything, I started my application and got an error saying the database connection failed. I checked that MySQL was running and that I could connect to it manually. Everything was fine. Running under MAMP with the exact same configuration, everything worked. The final deployment server is a Debian machine and there it worked with the exact same settings too. It also worked on a Windows machine using XAMPP.

It was getting late and I just couldn't understand what the problem was. I read each character of the configured username, password and database name like 100 times to make sure I had everything right. Then I just changed the host from localhost to 127.0.0.1, not expecting anything to change, and there it was: my application could connect to the database!

That evening I just went to bed, making a mental note that I had to understand the difference. The next day I did some research and figured out what the problem was:

Just looking at the mysql_connect page in the PHP online manual brought the answer. There is a note saying:

Whenever you specify “localhost” or “localhost:port” as server, the MySQL client library will override this and try to connect to a local socket (named pipe on Windows). If you want to use TCP/IP, use “127.0.0.1” instead of “localhost”.

Of course, when I searched for an answer online before I found the solution I never saw this manual page. But now it’s clear.

It's rather confusing, since it effectively means that MySQL has redefined localhost to mean “connect to a unix domain socket”. And when it uses a unix domain socket, it ignores whatever you define as a port. This is of course an issue if you want to have multiple instances of MySQL running.

It also looks like the default behavior on Windows is to use TCP/IP. But on Unix-like operating systems, it depends on whether you use localhost or 127.0.0.1.

If you need to use localhost and cannot configure 127.0.0.1, you’ll have to use socat to establish a relay between a unix domain socket and the MySQL tcp port.

One of the reasons why it works with localhost on one machine and not on another might also be that the path to the unix domain socket is not the one you expect. Usually the path is /tmp/mysql.sock. But if your MySQL instance uses a different one, you should adapt the mysql.default_socket setting in php.ini and point it to the right path (e.g. /opt/local/var/run/mysql5/mysqld.sock, /var/mysql/mysql.sock, /private/tmp/mysql.sock or /usr/local/mysql/run/mysql_socket). If you're using PDO, the setting you need to change is probably pdo_mysql.default_socket. You should be able to find the right path using the following:

mysqladmin variables | grep socket

or this:

mysqld --verbose --help | grep "^socket"

or this:

mysql_config.pl --socket

You can read the location where PHP looks for the MySQL socket in php.ini (mysql.default_socket, mysqli.default_socket and pdo_mysql.default_socket). If you have no php.ini file yet copy php.ini.default or rename it:

sudo cp /etc/php.ini.default /etc/php.ini

You can then change the path there. After changing the path, you need to restart the Apache web server e.g.:

sudo apachectl restart

If you do not want to change php.ini, you can also create a link:

mkdir /var/mysql
ln -s /tmp/mysql.sock /var/mysql/mysql.sock

You might have to use sudo to have the permissions to perform the above actions. Also make sure that the permissions of the /var/mysql directory are appropriate.

Note that if you see the socket at the right location but it disappears for unknown reasons, you may want to try deleting the /etc/my.cnf global options file. I saw this somewhere during my search on the internet but do not quite remember where.

On the other hand, instead of changing the path in php.ini, you may want to change the path in the MySQL configuration (my.cnf):

[mysqld]
socket=/var/mysql/mysql.sock

And restart the MySQL server.

Note that using the following hostname for the connection should also work although I haven’t tried it myself:

localhost:/tmp/mysql.sock

(change /tmp/mysql.sock to the path to the actual MySQL socket).

If you are on Mac OS X, please read the General Notes on Installing MySQL on Mac OS X. It states that the default location for the MySQL Unix socket is different on Mac OS X and Mac OS X Server depending on the installation type you chose, and provides a table listing the locations for each installation type.

In MAMP, the location should be /Applications/MAMP/tmp/mysql/mysql.sock. So if you are using an external PHP installation with MySQL from MAMP, you’ll probably need to run the following:

sudo ln -s /Applications/MAMP/tmp/mysql/mysql.sock /tmp/mysql.sock

MySQL: Access denied for user ‘root’@’localhost’ to database ‘information_schema’

When trying to import a MySQL database dump, it might stop with the following error and fail to load all the dumped databases:

Access denied for user 'root'@'localhost' to database 'information_schema'

INFORMATION_SCHEMA is a virtual database where information about all the other databases maintained by the MySQL server is stored. The tables in this schema are actually views, not real tables, and there is no database directory with that name.

Since it only contains data about the other databases on the server, you shouldn't actually be importing it, and you shouldn't have been dumping it in the first place. If you used mysqldump to dump all the databases, you are either using an old version (like 4.1) and should switch to a more current version of mysqldump, or you are using MySQL 5.5 or newer and actually went out of your way to dump INFORMATION_SCHEMA by naming it explicitly on the command line together with the --skip-lock-tables option.

Anyway, there are three solutions to this problem:

  1. Do not dump INFORMATION_SCHEMA.
  2. Remove it from the dump before importing it.
  3. Use the --force option when importing it.

Not dumping INFORMATION_SCHEMA, if you use a version of mysqldump which dumps it when the -A or --all-databases option is used, means you'll need to use the -B or --databases option instead and explicitly list the databases to be dumped.

Removing it from the dump just involves using a text editor and deleting the parts about INFORMATION_SCHEMA since the dump is just a text file containing a list of SQL commands.

--force tells the MySQL client to continue even if an SQL error occurs. So it will produce errors while importing INFORMATION_SCHEMA but will then import the databases dumped after it.

Sybase: Convert seconds to a human readable duration

Let’s say you get a number of seconds and want to convert it to a string so that it’s human readable e.g. saying that something took 15157 seconds is probably not a good idea and it’d make more sense to say it took 4 hours 12 minutes and 37 seconds.

Here’s a very simple stored procedure to perform this conversion:

create procedure hours_min_sec(@seconds int)
as
begin
select right('00'+convert(varchar(2), floor(@seconds / 3600)), 2)+':'+right('00'+convert(varchar(2),floor(@seconds / 60) % 60), 2)+':'+right('00'+convert(varchar(2),@seconds % 60), 2)
end
go

Note that you may see a line wrap here, but actually everything after begin and before end should be on one line.

You can use it like this:

1> hours_min_sec 15157
2> go

 --------
 04:12:37

(1 row affected)
(return status = 0)

Sybase: Updating statistics

In ASE, statistics are used by the optimizer to determine the appropriate query plan. Wrong or out-of-date statistics can cause the wrong query plan to be selected, which can lead to massive performance issues. It is thus very important to include statistics in regular database maintenance activities and make sure they are updated on a regular basis.

There are a few ways in which you can make sure that the statistics are up-to-date and support the query optimizer in choosing the optimal query plan: update statistics, update index statistics, update all statistics and update table statistics, as well as sp_modifystats to modify statistics manually.

At the end of the article, I'll also briefly discuss the performance considerations to keep in mind when choosing the appropriate command.

Update Statistics

Using the update statistics command, you can update the statistics about the distribution of key values for a table, an index on the table, specific columns of the table or on partitions.

Only reference the table

If you only reference a table name:

update statistics mytable

it will update statistics for the leading columns of all indexes on the table. The leading column of a composite index is the first column in the index definition. This is the most important one since the index cannot be used if only the second column is used in the WHERE clause but not the first one.

Here’s an example of such an index:

create index mytable_comp_ind on mytable(col1, col2)

In this case, col1 is the leading column for this index.

Explicitly reference an index

If you explicitly reference an index on the table:

update statistics mytable mytable_comp_ind

it will update the statistics for the leading column of the specified index i.e. it will update the statistics on col1.

Explicitly reference a partition

If you explicitly reference a partition on the table:

update statistics mytable partition mypartition

it will update the density information for all columns of all indexes of the partition. It will also create histograms for all leading columns of indexes of the partition. You can also provide a list of column names, in which case it will create histograms for the first column and densities for the composite column groups.
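
For example, to limit the update to two columns of one partition (a sketch, all names being placeholders):

update statistics mytable partition mypartition (col1, col2)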

Note that only local indexes of the partition are considered; global indexes are not.
Also note that updating the statistics for a partition also updates the global statistics, i.e. only local indexes are considered but the global statistics are updated with the gathered data.

Explicitly reference columns

If you explicitly reference a list of columns:

update statistics mytable (col1, col2)

it will create histograms for the first referenced column and density information for all column groups with the first column as leading column.

Note that performing such an update statistics has two consequences:

  1. A table scan is performed
  2. A sort is performed, meaning that you need to have enough space in tempdb

So this update statistics does have a high impact on the performance of the server and requires more space in tempdb.

Also note that in many cases statistics are mostly useful for columns referenced by indexes (especially as leading columns). Updating statistics for other columns creates an overhead. But in some cases it is required and better than using update all statistics.

Update Index Statistics

The command update index statistics updates the statistics of all columns of an index:

update index statistics mytable mytable_comp_ind

The difference between this command and:

update statistics mytable(col1, col2)

is that the latter focuses on the leading column of the index, while the former creates histograms not only for the leading column but for all columns specified in the index definition.

You can also update the statistics for all columns of all indexes on the table by omitting the index name:

update index statistics mytable

Note that it is not exactly the same as calling update index statistics for each index on the table, since the latter updates the statistics of columns referenced by multiple indexes multiple times. So if you intend to update the statistics for all indexes, it's more efficient to omit the index name than to issue one command per index.

Here also you can specify a data or index partition.
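
For example (a sketch, assuming a data partition named mypartition):

update index statistics mytable partition mypartition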

From a performance point of view, this means that:

  1. An index scan will be performed to update the statistics of the leading column of the index
  2. For the other columns, a table scan will be required for each of them, followed by a sort of a worktable in tempdb

This is basically the same problem as when using update statistics on columns which are not the leading column of an index.

Update All Statistics

This command creates statistics for all columns of the specified table:

update all statistics mytable

A partition can additionally be specified.

From a performance point of view, this means that:

  1. An index scan will be performed for each column which is the leading column of an index
  2. A table scan will be performed for each other column, creating a work table in tempdb which will then be sorted

Update Table Statistics

The update table statistics command is kind of a different beast. It only updates table and partition level statistics (such as row and page counts) stored in systabstats and does not affect column level statistics stored in sysstatistics.

Here is the syntax:

update table statistics mytable

Or with a partition name:

update table statistics mytable mypartition

Modifying Statistics

sp_modifystats is a system stored procedure which can be used to update the density information for a column or a column group of a table.

There are two uses of this command.

Modify density information

You can use the MODIFY_DENSITY parameter to change the cell density information for columns:

sp_modifystats mytable, "col1%", MODIFY_DENSITY, range, absolute, "0.5"

This will set the range cell density to 0.5 for the column col1 as well as for all column groups with density information having col1 as leading column.

The column group parameter can contain:

  • “all” meaning that all column groups should be updated
  • A column name meaning that only the density information for this column should be updated
  • A list of columns meaning that the density information of this specified column group should be updated
  • A column name or column list including wildcards as shown above

Note that the next update statistics run will reset the density information, so it's a good idea to always call sp_modifystats again after an update statistics.
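
A minimal sketch of that sequence, reusing the placeholder names from above:

update statistics mytable (col1)
go
sp_modifystats mytable, "col1%", MODIFY_DENSITY, range, absolute, "0.5"
go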

Handling skew in data

Please refer to my previous post regarding data skew, its effect on queries and how to handle it: Data skew and query plans.

Performance

Updating statistics is a required maintenance activity if you want to keep your queries fast. On the other hand, it also has a non-negligible impact on performance and load.

In general, the leading columns of indexes are the critical ones, so you need to make sure their statistics are always up-to-date.
But in some cases it also makes sense to have up-to-date statistics for other columns: if you have a WHERE clause containing a column which is not part of an index and 99% of the values in this column are the same, this will greatly impact how joins should be done. Without statistics on this column, a default distribution will be assumed and the wrong join might be selected.
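
In such a case, a targeted update on that single column is usually enough, e.g. (a sketch, status_col being a hypothetical non-indexed but skewed column):

update statistics mytable (status_col)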

So I’d recommend:

  1. Using “update statistics” instead of the other commands to optimize the leading columns of indexes
  2. Using “update statistics” on specific columns which are not the leading column of an index, in case you see that ASE chooses the wrong join because it assumes a default data distribution which does not match reality. Use it only if really required as it creates a huge load on the system
  3. Avoid using “update all statistics”. It generally makes more sense to use dedicated commands to update what needs to be updated

More information regarding sampling, histogram steps and the degree of parallelism will be added later…

Sybase: Locks when creating indexes

Got the following question from a colleague:

We have a table with an allpages lock scheme. A nonclustered index is added to this table (which has millions of entries). During the creation of the index, many blocked delete commands are observed. Would another lock scheme (datapages or datarows locking) help?

Answer:

The locking scheme of the table doesn't matter in this case. When an index is created, a table lock is always acquired. If the index is clustered, the lock is exclusive, i.e. SELECT, INSERT, UPDATE and DELETE operations are blocked. Otherwise it is only a shared lock, i.e. INSERT, UPDATE and DELETE statements are still blocked but SELECT statements are not. In this case it is a nonclustered index, but for a DELETE statement it makes no difference: the statement is still blocked.
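
As an illustration (a sketch, the names are made up), the following statement holds a shared table lock for its whole duration, so concurrent INSERT, UPDATE and DELETE statements on the table are blocked until the index build completes, while SELECT statements can still proceed:

create nonclustered index mytable_nc_ind on mytable(mycol)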

Sybase: Find long lasting connections

In order to find which connections have been open for a long time, you can use the following SELECT statement:

SELECT
   spid,
   sl.name as 'login',
   sd.name as 'database',
   loggedindatetime,
   hostname,
   program_name,
   ipaddr,
   srl.remoteusername as 'remotelogin',
   ss.srvname as 'remoteservername',
   ss.srvnetname as 'remoteservernetname'
FROM
   master..sysprocesses sp,
   master..syslogins sl,
   master..sysdatabases sd,
   master..sysremotelogins srl,
   master..sysservers ss
WHERE
   sp.suid>0
   AND datediff(day,loggedindatetime,getdate())>=1
   AND sl.suid = sp.suid
   AND sd.dbid = sp.dbid
   AND sl.suid *= srl.suid
   AND srl.remoteserverid *= ss.srvid
ORDER BY
   loggedindatetime

Here are a few explanations:

  • sp.suid>0 filters the ASE processes which run with suid=0
  • datediff(day,loggedindatetime,getdate())>=1 keeps only processes whose login date/time is at least 1 day old
  • if the login is a remote login, the remote login and remote server information will be filled in; otherwise these columns will contain null

Sybase: Performance issue with clustered index on allpages locked tables

When a table has a clustered index, ASE makes sure that all rows are physically stored in the order defined by the columns on which you have the clustered index. There can only be one clustered index on a given table as ASE cannot store the data with two different orders.

You can either create it explicitly using the following command:

create clustered index myclusteredindex on mytable(mycolumn)

Or create it by defining a primary key constraint:

create table mytable(
	mycol1 int not null,
	mycol2 int not null,
	primary key (mycol1)
)

Note that if you define a primary key, the index name is automatically generated and you cannot drop the index using the DROP INDEX command; you have to drop the corresponding constraint instead.
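
For example (a sketch; the generated constraint name is hypothetical, the real one can be looked up with sp_helpconstraint):

alter table mytable drop constraint mytable_mycol1_1234567890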

If the table has a data-only lock scheme (datapages or datarows), the table will be reorganized when the clustered index is created but the order of rows will not be maintained afterwards. If the table has the allpages lock scheme, then ASE will make sure that the order is maintained.

Note that although clustered indexes (especially when defined through primary key constraints) are often created as unique indexes, it doesn’t have to be the case.

Reading from an allpages-locked table with a clustered index, using the keys of the clustered index as criteria, is almost always faster than without the clustered index. But writing to the table is slower since ASE needs to maintain the order. This can create huge performance issues when working with huge tables with many updates on the index keys or many inserts/deletes. In some cases (I observed one on a table with 28 million entries), committing or rolling back changes on such a table can cause many physical IOs. If this is done in a transaction also involving updates on other tables, this can cause many other connections to be blocked. In the case I observed, it took up to 30 minutes to finish some rollbacks. My assumption was that ASE needs to reorganize the whole index, which involves reading and writing many pages. In this case, dropping the primary key constraint solved the problem: you can just replace the clustered index by a non-clustered one.
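
A sketch of such a replacement, reusing the placeholder names from above: after dropping the primary key constraint or clustered index, recreate the same key as a non-clustered index, so lookups stay indexed but ASE no longer has to maintain the physical row order:

create unique nonclustered index mytable_mycol1_nc on mytable(mycol1)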

So I'd recommend in general not to use clustered indexes on huge allpages-locked tables unless you are 100% sure that you need the extra read performance. The penalty when writing can be so huge that it cannot be compensated by the extra read speed.