Sybase ASE: using CPU affinity to prevent all engines from using the same CPU

Number of engines

ASE creates an OS-level process for each engine you define. You can change the number of engines by using the following command:

sp_configure "max online engines", N

(replace N with the number of engines you want to configure; the parameter is actually called "max online engines", see below).

Usually, if your server is used almost exclusively for ASE and you have X CPU cores available, you will want to configure X-1 engines. So assuming you have a dedicated server with 4 CPU cores, you’ll want to configure 3 engines.

You can also change this setting in the SYBASE.cfg file:

[Processors]
max online engines = 3
number of engines at startup = 3

The first line defines that at most 3 engines can be online and the second one that all 3 engines will be started by default.

In many cases, it makes sense to set both parameters to the same value so that all available engines are used automatically. But you can also set the second one to a lower value, benchmark the system with fewer engines and then bring additional engines online one after the other.

Increasing the maximum number of online engines beyond the number of available logical CPUs makes no sense. So I’d always recommend setting it to the total number of logical CPUs or to this number minus 1. Whether you bring them all online at startup or not depends on what else is running on the system and the specific workload you have on this server.
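As suggested above, you can bring engines online one after the other at runtime using sp_engine (a sketch; it assumes "max online engines" is already configured high enough, and the engine number in the offline example is arbitrary):

```sql
-- Bring one additional engine online (within the "max online engines" limit)
sp_engine "online"
go

-- Take a specific engine offline again (engine number 2 is just an example)
sp_engine "offline", 2
go
```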

If you configure too many ASE engines for the underlying CPUs, you will observe a significant loss of throughput, caused by the high number of involuntary context switches.

Hyper-Threading

Hyper-Threading creates “virtual CPUs”: an application running on a system where Hyper-Threading is enabled will see twice as many CPUs as are physically available. ASE does not distinguish between virtual CPUs and real CPUs.

Although Hyper-Threading provides the ability to run two ASE engines on one physical processor, you need to keep in mind that this is still not equivalent to running two engines on two physical processors.

In many cases, you should consider switching off Hyper-Threading. Unless you only have very few physical CPU cores available, HT will probably not bring you the expected benefits. You might run into problems because ASE does not see that two logical CPUs are actually backed by the same physical CPU: instead of distributing the load across physical CPUs, it may schedule two queries on the two logical CPUs of the same physical CPU. ASE could even schedule concurrent queries only on the virtual CPUs when it would be better to run them on separate physical CPUs (although in theory there should be no difference in performance between a virtual CPU and a real one).

But keep in mind that whether HT brings performance benefits or, on the contrary, makes your system slower really depends on the system itself, in particular on your actual hardware and workload. So benchmarking it on the specific system is still a good idea.

ASE 15.7 comes with a threaded kernel which takes advantage of threaded CPU architectures: switching between threads is cheaper than switching between processes, which brings a performance boost. But this is not related to Hyper-Threading.

With the default kernel of ASE 15.7, each engine is a thread living in a thread pool instead of being an OS process (which was already the case for ASE on Windows even before ASE 15.7).
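The kernel mode can be checked and switched with sp_configure (a sketch; "kernel mode" is a static parameter, so switching requires a restart of ASE):

```sql
-- Show the current kernel mode (threaded or process)
sp_configure "kernel mode"
go

-- Switch back to the pre-15.7 process kernel (takes effect after a restart)
sp_configure "kernel mode", 0, "process"
go
```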

CPU Affinity

By default, the processes for the different ASE engines have no affinity to the physical or virtual processors. Usually it is not required to force any CPU affinity, as ASE will handle it properly.

Here’s an example with 3 engines running and 3 heavy queries running in parallel:

[Screenshot: ASE parallel queries, good case]

Here you see that there are 4 CPUs and 3 engines running, using CPU0, CPU1 and CPU3. You can also press “f”, then “j” and return in top to display an additional column which explicitly shows which CPU each engine is using:

[Screenshot: ASE parallel queries, good case, with CPU number]

The column “P” shows that the 3 dataserver processes use CPUs 0, 1 and 3.

In some cases (I’m not sure when or why this happens), you will see all dataserver processes using the same CPU even though they are processing different queries. Since multiple tasks then have to be handled by the same CPU, each task becomes slower and there is a lot of overhead due to task switching.

If this happens, you can use the “dbcc tune” command to configure a CPU affinity:

dbcc tune(cpuaffinity, -1, "on")

The second parameter (-1 here) is the start CPU. This CPU is always skipped, so setting it to -1 means that:

  • The first engine will be bound to CPU0
  • The second one to CPU1
  • The third one to CPU2

If you want to keep CPU0 for other processes, you’d use:

dbcc tune(cpuaffinity, 0, "on")

This will do the following:

  • The first engine will be bound to CPU1
  • The second one to CPU2
  • The third one to CPU3

After that you should see that all dataserver processes are using different CPUs.

Note: The setting will be active only until the ASE server is restarted. So the dbcc tune command must be reissued each time ASE is restarted.

Also note that some operating systems do not support CPU affinity. In this case, the dbcc tune command will be silently ignored.

Sybase ASE: Get one line for each value of a column

Let’s assume you have such a table:

CREATE TABLE benohead(SP1 int, SP2 int, SP3 int)

Column SP1 has non-unique values and you want to keep only one row per unique SP1 value.

Assuming we have inserted the following values in the table:

INSERT INTO benohead VALUES(1,2,3)
INSERT INTO benohead VALUES(1,4,5)
INSERT INTO benohead VALUES(1,6,7)
INSERT INTO benohead VALUES(2,3,2)
INSERT INTO benohead VALUES(3,4,6)
INSERT INTO benohead VALUES(3,7,8)
INSERT INTO benohead VALUES(4,1,7)

It’d look like this:

SP1         SP2         SP3
----------- ----------- -----------
          1           2           3
          1           4           5
          1           6           7
          2           3           2
          3           4           6
          3           7           8
          4           1           7

Since SP2 and SP3 can have any value, and you could also have rows where all 3 fields have the same value, it’s not trivial to get a list looking like this:

SP1         SP2         SP3
----------- ----------- -----------
          1           6           7
          2           3           2
          3           7           8
          4           1           7

Even if the table is sorted, iterating through the rows and keeping track of the last SP1 you’ve seen will not help you, since you cannot delete e.g. the second row: there is nothing to identify it by (like ROWID in Oracle).

One way to handle it is getting a list of unique SP1 values and their row count:

SELECT SP1, count(*) as rcount FROM benohead GROUP BY SP1

This will return something like this:

SP1         rcount
----------- -----------
          1           3
          2           1
          3           2
          4           1

You can then iterate through this result set and, for each value of SP1, set rowcount to rcount-1 and delete entries with that SP1 value. In the end, you’ll have one row per SP1 value. Of course, if you just need the data and do not want to actually clean up the table, you’ll have to do it on a copy of the table.
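The iterate-and-delete approach can be sketched as follows (a sketch only, assuming the benohead table from above; "set rowcount" limits how many rows the following statements affect and accepts a variable in ASE):

```sql
-- Cursor over all SP1 values which appear more than once,
-- together with the number of extra rows to remove
declare dup_cursor cursor for
    select SP1, count(*) - 1 from benohead
    group by SP1 having count(*) > 1
go
declare @sp1 int, @extra int
open dup_cursor
fetch dup_cursor into @sp1, @extra
while (@@sqlstatus = 0)
begin
    set rowcount @extra               -- delete all but one row for this SP1...
    delete from benohead where SP1 = @sp1
    set rowcount 0                    -- ...and reset the limit
    fetch dup_cursor into @sp1, @extra
end
close dup_cursor
deallocate cursor dup_cursor
go
```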

Instead of deleting, you can also iterate through the values of SP1 and fetch the top 1 row for each value:

SELECT TOP 1 SP1, SP2, SP3 FROM benohead WHERE SP1=1

If you had only one additional column (e.g. SP2), it’d be even easier, since you could just use MAX and GROUP BY:

SELECT SP1, MAX(SP2) AS SP2 FROM benohead GROUP BY SP1

which returns:

SP1         SP2
----------- -----------
          1           6
          2           3
          3           7
          4           1

Unfortunately, this doesn’t scale to multiple columns. If you also have SP3, you cannot simply use MAX twice, since you will then get combinations which didn’t exist in the original table. Let’s insert an additional row:

INSERT INTO benohead VALUES(1,1,9)

The following statement:

SELECT SP1, MAX(SP2) AS SP2, MAX(SP3) AS SP3 FROM benohead GROUP BY SP1

will return:

 SP1         SP2         SP3
 ----------- ----------- -----------
           1           6           9
           2           3           2
           3           7           8
           4           1           7

although there was no row with SP1=1, SP2=6 and SP3=9 in the table.

So if you don’t like the solution iterating and deleting with rowcount, you’ll need a way to uniquely identify each row: an identity column.

You can add an identity column to the table:

ALTER TABLE benohead ADD ID int identity

And then select the required rows like this:

SELECT * from benohead b WHERE b.ID = (SELECT MAX(ID) FROM benohead b2 WHERE b.SP1=b2.SP1)

This will fetch for each value of SP1 the row with the highest ID.

Or you can create a temporary table with an identity column:

SELECT ID=identity(1), SP1, SP2, SP3 INTO #benohead FROM benohead	

And then use a similar statement on the temporary table.
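The statement on the temporary table would then look like this (same pattern as above, just using the generated ID column of #benohead):

```sql
-- For each SP1 value, keep the row with the highest generated ID
SELECT SP1, SP2, SP3 FROM #benohead b
WHERE b.ID = (SELECT MAX(ID) FROM #benohead b2 WHERE b.SP1 = b2.SP1)
```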

 

Sybase ASE Cookbook Update

I wrote this cookbook about 10 months ago. It basically contains all the information about Sybase ASE I’ve documented on my blog over the years. I use it when I need offline access to some tricks (since I’m not getting younger, it’s sometimes useful to have a kind of brain dump somewhere). I also compiled it and published it here in the hope that someone else might find it useful.

I’ve just updated the cookbook with a few new things. But it’s really a small update.

This cookbook is still available for free. And since I am still no professional writer and still cannot afford paying someone to proofread it, if you notice any mistakes, explanations which cannot be understood or anything like this, please leave a comment here or contact me at henri.benoit@gmail.com. I can’t guarantee how fast I can fix mistakes but I’ll do my best to do it in a timely manner.

Benohead Sybase ASE Cookbook

Sybase ASE: List all tables in the current database and their size

In order to get a list of all tables in the current database, you can filter the sysobjects table by type = 'U' (user tables), e.g.:

select convert(varchar(30),o.name) AS table_name
from sysobjects o
where type = 'U'
order by table_name

In order to get the number of rows of each table, you can use the row_count function. It takes two arguments:

  • the database ID – you can get the ID of the current database using the db_id function
  • the object ID – it’s the id column in sysobjects

e.g.:

select convert(varchar(30),o.name) AS table_name,
row_count(db_id(), o.id) AS row_count
from sysobjects o
where type = 'U'
order by table_name

And in order to get some size information, you can use the data_pages function. It will return the number of pages, which you can then multiply by the number of kilobytes per page, e.g.:

select convert(varchar(30),o.name) AS table_name,
row_count(db_id(), o.id) AS row_count,
data_pages(db_id(), o.id, 0) AS pages,
data_pages(db_id(), o.id, 0) * (@@maxpagesize/1024) AS kbs
from sysobjects o
where type = 'U'
order by table_name

The columns returned by this statement are the table name (if you have names longer than 30 characters, you should replace 30 by something higher), the number of rows, the number of data pages and the size in kilobytes.

If you have an ASE version older than 15, the statement above will not work but you can use the statement below instead:

select sysobjects.name,
Pages = sum(data_pgs(sysindexes.id, doampg)),
Kbs = sum(data_pgs(sysindexes.id, doampg)) * (@@maxpagesize/1024)
from sysindexes, sysobjects
where sysindexes.id = sysobjects.id
    and sysindexes.id > 100
    and indid < 2
group by sysobjects.name
order by sysobjects.name

This will return the table name, number of pages and size in kilobytes.

Sybase ASE: Left outer join

Sybase ASE supports both the old syntax and the newer SQL-92 syntax for left outer joins:

Old syntax:

SELECT * FROM table1, table2 WHERE table1.key*=table2.fkey

New syntax:

SELECT * FROM table1 LEFT JOIN table2 ON table1.key=table2.fkey

As long as you do not have other criteria, the results will be the same. But you might get differing results as soon as you add other criteria, e.g. the two following statements seem to do the same thing but deliver different results:

1> select top 10 p.p_key, e.e_uid, e.field1 from table_p p, table_e e where p.p_key*=e.p_key and e.field1='V1'
2> go
 p_key       e_uid            field1
 ----------- ---------------- ----------------
           2 2005092612595815 V1
           2 2005030715593204 V1
           2 2005092614251692 V1
           4 NULL             NULL
           8 NULL             NULL
           9 NULL             NULL
          10 NULL             NULL
          11 NULL             NULL
          14 NULL             NULL
          15 NULL             NULL

(10 rows affected)
1> select top 10 p.p_key, e.e_uid, e.field1 from table_p p left join table_e e on p.p_key=e.p_key where e.field1='V1'
2> go
 p_key       e_uid            field1
 ----------- ---------------- ----------------
     1057606 2006100212531641 V1
     1093639 2006100215370890 V1
     1015380 2006100410065929 V1
     1093639 2006100215370949 V1
     1029807 2006100508354802 V1
     1029807 2006100508402832 V1
     1044378 2006100509331826 V1
     1092232 2006100510385895 V1
     1030314 2006100513585134 V1
     1093947 2006100606211859 V1

(10 rows affected)

The reason is that, when executing the first statement, the database engine does not only consider p.p_key=e.p_key as a join criterion but also e.field1='V1'. So the first statement is basically equivalent to the following SQL-92 statement:

1> select top 10 p.p_key, e.e_uid, e.field1 from table_p p left join table_e e on p.p_key=e.p_key and e.field1='V1'
2> go
 p_key       e_uid            field1
 ----------- ---------------- ----------------
           2 2005092612595815 V1
           2 2005030715593204 V1
           2 2005092614251692 V1
           4 NULL             NULL
           8 NULL             NULL
           9 NULL             NULL
          10 NULL             NULL
          11 NULL             NULL
          14 NULL             NULL
          15 NULL             NULL

(10 rows affected)

Note that the second criterion is not in the where clause but in the on part.

So the old left outer join syntax is more compact, but it is ambiguous because it doesn’t properly separate the join criteria from the where criteria. In the case of a left outer join this makes a huge difference, since the join criteria do not filter the returned rows but the where criteria do.

In most cases, the results you are after are the ones returned by the first and last queries above. But you should avoid the old left outer join syntax and use the SQL-92 syntax everywhere. It makes it clearer what you mean by the statement and can save you time searching for why you did not get the output you were expecting. But also with the SQL-92 syntax, you should carefully consider whether a criterion belongs in the join criteria or in the where clause (as stated above, in most cases when using a left outer join, criteria on the joined table should probably go in the join criteria).

Sybase ASE Cookbook

I’ve compiled in this short ebook all posts I’ve ever written regarding Sybase ASE and tried to cluster them a little. This ebook is full of information that is useful when you work with Sybase ASE, daily or once in a while. Sybase ASE is a high-performance and scalable database management system, but it is also complex and not always easy to understand.

After working for over 10 years with ASE, I’ve gathered a lot of information I share on a daily basis with colleagues and once in a while on my blog at http://benohead.com.

I provide this ebook in the hope that it will be helpful, and it is thus available for free. Since I’m not a professional writer, I do not have a crew of people proofreading it. So the spelling might not be as good as it should be and I can’t be sure that everything in there is 100% accurate. If you find any mistakes, please contact me at henri.benoit@gmail.com and I’ll do my best to update it.

You can download the cookbook here.

Sybase ASE: check contents of the procedure cache

In order to peek into the procedure cache, you can use the following dbcc command:

dbcc procbuf

In order to see the output on the console, use:

dbcc traceon(3604)
go
dbcc procbuf
go

You’ll see that the output is pretty extensive. If what you are after is which triggers and procedures are using space in the procedure cache and how much space they use, you are only interested in lines like these:

...
    Total # of bytes used               : 1266320
...
pbname='sp_aux_getsize'   pbprocnum=1
...

You can thus execute it and grep for these two lines:

$SYBASE/OCS/bin/isql -Usa -Pxxxx << EOT | grep "pbname
Total # of bytes used"
dbcc traceon(3604)
go
dbcc procbuf
go
EOT

You of course need to replace xxxx by your actual password.

Then you will want to make the output look nicer:

  • Merge every two lines into one: awk '!(NR%2){print p" "$0}{p=$0}'
  • Display only the name and the size: awk '{ print $1" "$9" bytes"; }' | sed "s/pbname=//g" | sed "s/'//g"
  • Sort by size: sort -k2 -n

Putting it all together:

$SYBASE/OCS/bin/isql -Usa -Pxxxx << EOT | grep "pbname
Total # of bytes used" | awk '!(NR%2){print p" "$0}{p=$0}' | awk '{ print $1" "$9" bytes"; }' | sed "s/pbname=//g" | sed "s/'//g" | sort -k2 -n 
dbcc traceon(3604)
go
dbcc procbuf
go
EOT

You will then see something like:

...
sp_jdbc_tables 62880 bytes
sp_getmessage 68668 bytes
sp_aux_getsize 80596 bytes
sp_mda 81433 bytes
sp_mda 81433 bytes
sp_drv_column_default 90144 bytes
sp_dbcc_run_deletehistory 133993 bytes
sp_lock 180467 bytes
sp_helpsegment 181499 bytes
sp_dbcc_run_summaryreport 207470 bytes
sp_modifystats 315854 bytes
sp_autoformat 339825 bytes
sp_spaceused 353572 bytes
sp_jdbc_columns 380403 bytes
sp_do_poolconfig 491584 bytes
sp_configure 823283 bytes

Sybase ASE: Using archive databases

Archive databases are used to access data from a backup file directly, without having to restore the database. Let’s say you lost some data in one table but have made many changes to other tables since the last backup. Just loading the last backup is not an option since you’d lose everything done since then. Of course, if you work with transaction log dumps, you can reduce the loss of data, but very often it’s still too much. Additionally, in some cases you know that the data you want to reload has not changed since the last backup (i.e. some kind of master data). So the best solution would be to keep the current database and just reload this one table. Or maybe you do not want to reload a complete table but just copy a few deleted rows back into a table.

That’s exactly what an archive database is for. An archive database is just a normal database dump loaded in a special way, so that you can access the data without having to do a regular load of the dump, which would overwrite everything. Note that you cannot dump an archive database itself.

So what do you need in order to mount a database dump as an archive database? You need two additional databases:

  1. A “scratch database”
  2. An archive database

The “scratch database” is a small database used to store a system table called sysaltusages. This table maps the database dump files you are loading to the archive database.

The archive database is an additional database used to store “modified pages”. Modified pages are pages created in addition to the pages stored in the dump files, e.g. as the result of the recovery performed after loading the database dump. This database is typically much smaller than the dump files you are loading, but it is difficult to tell upfront how big it will be.

So once you have loaded an archive database, the data you see comes from these three sources:

  • The loaded dump files
  • The scratch database
  • The archive database

So let’s first create the two databases (I assume here you have some devices available to create them on).

First we need to create the scratch database:

use master
go
create database scratchdb on scratch_data_dev='100M' log on scratch_log_dev='100M'
go

This will create the scratch database and bring it online. Then we need to mark this database as a scratch database:

sp_dboption 'scratchdb', 'scratch database', 'true'
go
use scratchdb
go
checkpoint
go

Then we need to create the archive database:

use master
go
create archive database archivedb on archive_data_dev='100M' with scratch_database = scratchdb
go

Replace scratch_data_dev, scratch_log_dev and archive_data_dev by the names of the devices on which you want to create the data and log fragments of the scratch database and the data fragment of the archive database.

Now we’re ready to load the dump. Just do it the way you would load the database to restore it but only load it to the just created archive database e.g.:

load database archivedb from '...'

You can of course also load the transaction logs with:

load transaction archivedb from '...'

Note that while loading the database dump or the transaction log dumps, you might get an error message saying that either the transaction log of the scratch database or the modified pages section of the archive database ran full, e.g.:

There is no more space in the modified pages section for the archive database ‘pdir_archive_db’. Use the ALTER DATABASE command to increase the amount of space available to the database.

Depending on the message you get, you’ll have to add more space to the transaction log of the scratch database or extend the archive database using alter database. Note that ASE usually gives you a chance to do so before aborting. But at some point it will probably abort, so do not take your time 😉
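Extending either database can be done with alter database, e.g. (a sketch; the device names match the hypothetical ones used above and the sizes are arbitrary):

```sql
-- Extend the modified pages section of the archive database
alter database archivedb on archive_data_dev = '50M'
go

-- Extend the transaction log of the scratch database
alter database scratchdb log on scratch_log_dev = '50M'
go
```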

If you do not care about the recovery and have limited storage available for the archive database you can use:

load database archivedb from '...' with norecovery

Loading with norecovery also reduces the time required to load, and the database is automatically brought online (which also means you cannot load additional transaction logs afterwards). The downside is that the database might be inconsistent, both from a physical and from a transactional point of view.

If you did not use the norecovery option, you have to bring the archive database online:

online database archivedb
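Once the archive database is online, you can copy lost rows back into the live database with a plain insert/select (a sketch; the database, table and column names are made up):

```sql
-- Copy back rows which still exist in the archive
-- but were deleted from the live table
insert into mydb..customer
select * from archivedb..customer a
where not exists (select 1 from mydb..customer c where c.id = a.id)
```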

Once you are done with the archive database and do not need it anymore, you can just drop both databases:

drop database archivedb
drop database scratchdb

Sybase: Reset the identity counter on a table

When working with an identity column, a counter is maintained per table. This counter is used to generate new values for the identity column of the table.
When you load a lot of data into a table as a test and then truncate the table, the identity counter is not reset. So if you inserted 1000000 rows, truncated the table and then inserted one entry, you’d get the value 1000001 for the new entry.

If you want to reset the identity counter for a table in order to have the next value of the identity column be 1 instead, you can change the identity_burn_max attribute for the table e.g.:

sp_chgattribute my_table, 'identity_burn_max', 0, '0'

Please note that this command creates an exclusive table lock on the table. The lock is not kept for long but this means that the command will be blocked by any read or write lock on the table.

The last parameter is the value you want to set the identity counter to. So if you have kept the entries with the identity values 1 to 10 and deleted the rest, you’d have to set the identity_burn_max attribute to 10:

sp_chgattribute my_table, 'identity_burn_max', 0, '10'

If you try to set it to a lower value (i.e. a value lower than the maximum value already in use in the table), sp_chgattribute will fail, refusing to update the attribute because you then risk having duplicate values in there.
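To pick a safe value, you can first check the highest identity value still in use (a sketch; my_table and my_identity_column are placeholders):

```sql
-- Highest identity value currently in use
SELECT MAX(my_identity_column) FROM my_table
go

-- Set the counter accordingly (the next inserted row gets this value + 1)
sp_chgattribute my_table, 'identity_burn_max', 0, '10'
go
```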

You can work around it by directly setting the attribute using dbcc:

dbcc set_identity_burn_max(my_database, my_table, 'my_new_value')

Also note that if what you want is to actually create an identity gap, all you have to do to increase the counter is to allow identity inserts on the table, insert a higher value and delete it:

SET IDENTITY_INSERT my_table ON
go
INSERT INTO my_table(my_identity_column,...) VALUES (1785,...)
go
SET IDENTITY_INSERT my_table OFF
go
DELETE FROM my_table WHERE my_identity_column=1785
go

Sybase: transaction log dump and non-logged operations

Most of you probably already know this, but just to make sure: there are three kinds of operations from a transaction log perspective:

  1. logged operations: those are operations which are completely logged in the transaction log.
  2. minimally logged or non-logged operations which are logged in the transaction log, but where not every change performed by the operation is logged. They do not prevent a dump and restore of the transaction log.
  3. minimally logged or non-logged operations which are not logged in the transaction log at all.

Here are a few examples of each type of operation:

  1. insert into, update, delete
  2. truncate table
  3. select into, fast bcp, parallel sort

When an operation of the third category is performed, the transaction log entries are missing, so a dump and restore of the transaction log alone is not possible anymore. This means ASE is not able to recover the database in a disaster recovery scenario unless you take a full dump of the database. Since the dumped transaction log would not contain the required information, ASE prevents you from dumping the transaction log once one of these operations has been performed: you couldn’t use the dumped log to recover the database anyway. Many people tend to think that truncate table also prevents a transaction log dump, which is not true. Truncate table does not log every deletion in the table and is thus not a fully logged operation, but it does log all page deallocations in the transaction log, so that it is still possible to reconstruct the database.

So if you rely on transaction log dumps to recover the database, or if you use them to migrate data from one system to another, it is important to:

  • prevent such operations to happen
  • check whether such operations have been performed

prevent such operations to happen

This can be done by setting an option on the database:

master..sp_dboption mydb, "select into/bulkcopy/pllsort", false
go
use mydb
go
checkpoint
go

Replace mydb by the name of the database on which you want to prevent such operations. With this option set, select into, fast bcp and parallel sort operations will no longer be allowed in this database.

check whether such operations have been performed

If such operations are not prevented as shown above, you can use the following query to check whether any have been performed:

select tran_dumpable_status('mydb')

If it returns 0, then everything is fine and a dump of the transaction log should work fine. Otherwise, such an operation was performed and it is not possible to dump the transaction log until a full dump of the database is performed.

If the returned value is not 0, you can find out exactly what happened by interpreting the returned bit mask:

  • 1 – the specified database name was not found.
  • 2 – there is no separate log device for this database.
  • 4 – the log first page is on a data-only disk fragment.
  • 8 – the trunc log on chkpt option is set for the database.
  • 16 – an operation preventing a dump of the transaction log was performed.
  • 32 – a transaction log dump with truncate_only was performed.
  • 64 – it’s a new database and a full database dump must be performed before you can dump the transaction log.

Here are a few explanations on the different values:

  • 1 – this mostly means that you’ve misspelled the database name.
  • 2 or 4 – the transaction log needs to be entirely on a separate transaction log device for the dump of the transaction log to work.
  • 8 – the database is configured to truncate the transaction log on checkpoint. This means that the transaction log is mostly empty except for things which happened since the last checkpoint. A dump of the transaction log then makes no sense, since most of the changes performed have already been discarded. See this post for more information.
  • 16 – a select into, fast bcp or parallel sort has been performed and the transaction log would be useless anyway.
  • 32 – the transaction log has been truncated by this command and the chain has been broken.
  • 64 – the transaction log contains the changes applied since the last full dump so you need at least one full dump in order to use it.
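Since the returned value is a bit mask, several of these conditions can be set at the same time. Decoding it can be sketched like this (mydb is a placeholder):

```sql
declare @status int
select @status = tran_dumpable_status('mydb')
if (@status & 16) = 16
    print "a non-logged operation (select into, fast bcp, parallel sort) was performed"
if (@status & 32) = 32
    print "the transaction log was dumped with truncate_only"
if (@status & 64) = 64
    print "a full database dump must be performed first"
go
```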

If you’re still working with a pre-15 version of ASE (e.g. 12.5), tran_dumpable_status will not be available, as it was introduced in ASE 15. Instead, you’ll have to read the dbinfo structure. This can be done using:

dbcc traceon (3604)
go
dbcc dbinfo (my_db_name)
go

Replace my_db_name with the name of the database you want to check. Then look for offset 128=dbi_ldstate. If the value is 4, 8 or 12, you cannot dump the transaction log for this database. Here’s what the different values mean:

  • 4 (DBI_NONLOGGED_WRITES) means that a non-logged write was performed
  • 8 (DBI_TRUNCATED_ONLY) means that the log was dumped with truncate_only
  • 12 means both

In all three cases, a normal dump of the transaction log will not be possible.

The dbinfo structure is also accessible through the keys1 column of sysindexes for syslogs (id = 8). So you can alternatively use the following statement:

SELECT CONVERT(BINARY,SUBSTRING(keys1,129,1)) & 12 FROM my_db_name..sysindexes WHERE id = 8

Replace my_db_name with the name of the database you want to check. This will also return 0, 4, 8 or 12; a normal dump of the transaction log is only possible if it returns 0.

So if you do not need any of the operations which prevent the transaction log from being dumped, the best solution is to prevent them at the database level. Otherwise, when you need to perform a transaction log dump (e.g. because the transaction log is full or because you need to migrate the changes to a replicated database), you should first check whether a transaction log dump is possible and trigger a full dump instead if it is not.