Oracle 23c – Reservable Columns

This article is about a major new feature in Oracle 23c, reservable columns, whose goal is to alleviate locking problems. Basically, updates to a reservable column are written to a queue (journal) table and processed sequentially, without locking the row. To demonstrate, let's create a table and populate it:

SQL> create table test_tab (
2 c1 integer default on null test_seq.nextval primary key,
3 c2 varchar2(20),
4* c3 integer);

Table TEST_TAB created.
SQL> insert into test_tab(c1,c2,c3) values (null,'qwerty',1);

1 row inserted.

Elapsed: 00:00:00.014
SQL> commit;
Commit complete.

So, now we have a table with 1 row in it. The table is completely classic, no special features. Now, let’s update the table without committing and open another session, which will try to get the lock:

Session 1:
SQL> update test_tab set c3=2 where c1=1539;

1 row updated.

Elapsed: 00:00:00.012
SQL>

Session 2:
Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0

SQL> select c3 from test_tab where c1=1539 for update of c3 nowait;
select c3 from test_tab where c1=1539 for update of c3 nowait
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
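For comparison, the same writer-blocks-writer behavior can be reproduced in practically any relational database. Here is a minimal SQLite analogue, purely for illustration: SQLite's BEGIN IMMEDIATE with a zero busy-timeout plays the role of FOR UPDATE ... NOWAIT.

```python
import os
import sqlite3
import tempfile

# Session 1 opens a write transaction and updates the row without committing.
path = os.path.join(tempfile.mkdtemp(), "test.db")
s1 = sqlite3.connect(path, isolation_level=None)   # autocommit; we manage transactions
s1.execute("CREATE TABLE test_tab (c1 INTEGER PRIMARY KEY, c3 INTEGER)")
s1.execute("INSERT INTO test_tab VALUES (1539, 1)")

s1.execute("BEGIN IMMEDIATE")                       # take the write lock
s1.execute("UPDATE test_tab SET c3 = 2 WHERE c1 = 1539")

# Session 2, with a zero busy-timeout, fails at once -- the SQLite
# equivalent of ORA-00054 with NOWAIT.
s2 = sqlite3.connect(path, timeout=0, isolation_level=None)
try:
    s2.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError as e:
    print(e)                                        # database is locked
```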

That is completely normal; it is what you would expect from any relational database. Now, let's play a little trick:

SQL> alter table test_tab modify c3 reservable;

Table TEST_TAB altered.

Elapsed: 00:00:00.082

That will change the behavior drastically:

Session 1:

SQL> update test_tab set c3=c3+1 where c1=1539;

1 row updated.

Session 2:

SQL> select c3 from test_tab where c1=1539 for update of c3 nowait;

    C3
----------
     2

Elapsed: 00:00:00.00

So, it appears that the UPDATE statement left no lock on the row? Let's check the locks:

SQL> select sid from v$session where username='SCOTT';

    SID 
_______ 
      4 
   1017 

Elapsed: 00:00:00.004
SQL> select sid,id1,id2,type,lmode,request from v$lock where sid in (4,1017);

    SID       ID1           ID2 TYPE       LMODE    REQUEST 
_______ _________ _____________ _______ ________ __________ 
   1017    524301           693 TX             6          0 
      4     85826             0 TM             3          0 
      4     85824             0 TM             3          0 
   1017     85824             0 TM             3          0 
      4    393220           651 TX             6          0 
      4       138    1744658490 AE             4          0 
   1017       138    1744658490 AE             4          0 

7 rows selected. 

Elapsed: 00:00:00.113

So, let’s first check the “TM” locks. Those are table locks and will tell us what is going on under the hood:

SQL> select object_name,object_type,object_id from dba_objects where object_id in (85824,85826) and owner='SCOTT';

OBJECT_NAME             OBJECT_TYPE    OBJECT_ID 
_______________________ ______________ ____________ 
TEST_TAB                TABLE          85824 
SYS_RESERVJRNL_85824    TABLE          85826 

So, there are two tables involved here? I was updating only TEST_TAB; where does this second table come from? Let's see:

SQL> desc SYS_RESERVJRNL_85824
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ORA_SAGA_ID$                                       RAW(16)
 ORA_TXN_ID$                                        RAW(8)
 ORA_STATUS$                                        CHAR(12)
 ORA_STMT_TYPE$                                     CHAR(16)
 C1                                        NOT NULL NUMBER(38)
 C3_OP                                              CHAR(7)
 C3_RESERVED                                        NUMBER(38)

So, this is a journal table that contains information about the transaction, its status, and the operation. Here are the contents of the table:

SQL> select * from SYS_RESERVJRNL_85824;

ORA_SAGA_ID$   ORA_TXN_ID$        ORA_STATUS$   ORA_STMT_TYPE$     C1   C3_OP   C3_RESERVED
______________ __________________ _____________ ________________ _____ _______ _____________
               08000D00B5020000   ACTIVE        UPDATE            1539 +                   1

Basically, the operation is written to the journal and applied by background processes; I was not yet able to figure out which process is responsible for that. And that is the new feature: a way to bypass locking contention. However, there are serious limitations. Here are some:

  • Only NUMBER, INTEGER, and FLOAT columns can be made reservable.
  • A table with reservable columns cannot be dropped; the column must be made NOT RESERVABLE before dropping the table.
  • The only possible assignment is COL = COL <op> <amount>, like C3 = C3 + 1. Direct assignment of a constant value is not possible.
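To make the mechanism concrete, here is a toy Python model of the journal-based approach described above. All names (ReservableTable, the journal fields) are invented for illustration; this is not Oracle's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ReservableTable:
    """Toy model of a table with one reservable NUMBER column: updates are
    recorded as +/- deltas in a journal (like SYS_RESERVJRNL_*) and folded
    into the base row only at commit, so concurrent writers never block
    each other on the row lock."""
    rows: dict                          # c1 -> current c3 value
    journal: list = field(default_factory=list)

    def update(self, txn, c1, op, amount):
        if op not in ("+", "-"):
            # mirrors ORA-55746: only + or - is allowed on a reservable column
            raise ValueError("only + or - operations are supported")
        self.journal.append(dict(txn=txn, c1=c1, op=op,
                                 amount=amount, status="ACTIVE"))

    def commit(self, txn):
        for e in self.journal:
            if e["txn"] == txn and e["status"] == "ACTIVE":
                delta = e["amount"] if e["op"] == "+" else -e["amount"]
                self.rows[e["c1"]] += delta
                e["status"] = "PROCESSED"

t = ReservableTable({1539: 1})
t.update("tx1", 1539, "+", 1)   # session 1: c3 = c3 + 1
t.update("tx2", 1539, "+", 5)   # session 2: not blocked by session 1
t.commit("tx1")
t.commit("tx2")
print(t.rows[1539])             # -> 7
```

Because each transaction only appends deltas to the journal, the two "sessions" never contend for the row; the base value is reconciled at commit time, which is exactly why only + and - operations can be supported.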

Update limitation:

update test_tab set c3=1024 where c1=1539
Error at Command Line : 1 Column : 8
Error report -
SQL Error: ORA-55746: Reservable column update statement only supports + or - operations on
a reservable column.
55746. 00000 -  "Reservable column update statement only supports + or - operations on a reservable column."
*Document: YES
*Cause:    An attempt is being made to update a reservable column with something
           other than a + or -. An amount should be added or subtracted from
           the reservable column. Direct assignments to reservable columns
           are not supported.
*Action:   Change the update statement to add or subtract from the same
           reservable column. The set clause should be of the form
           reservable_col1 = reservable_col1 + (<expression>) or
           reservable_col1 = reservable_col1 - (<expression>)."
           where the expression in parenthesis evaluates to the amount to be
           added or subtracted from the same reservable column,
           reservable_col1.

Dropping the table:

SQL> drop table test_tab;

Error starting at line : 1 in command -
drop table test_tab
Error report -
ORA-55764: Cannot DROP or MOVE tables with reservable columns. First run "ALTER
TABLE <table_name> MODIFY (<reservable_column_name> NOT RESERVABLE)" and then
DROP or MOVE the table.
55764. 00000 -  "Cannot DROP or MOVE tables with reservable columns. First run \"ALTER TABLE <table_name> MODIFY (<reservable_column_name> NOT RESERVABLE)\" and then DROP or MOVE the table."
*Document: YES
*Cause:    An attempt is being made to DROP or MOVE a table with reservable columns.
*Action:   Run ALTER TABLE <table_name> MODIFY (<reservable_column_name>
           NOT RESERVABLE) before issuing a DROP or MOVE of the table.
Elapsed: 00:00:00.066

Datatype limitation:

SQL> alter table test_tab modify c2 reservable;

Error starting at line : 1 in command -
alter table test_tab modify c2 reservable
Error report -
ORA-55748: Reservable column property specified on column "C2" is supported only
on columns of data types Oracle NUMBER, INTEGER, and FLOAT.
55748. 00000 -  "Reservable column property specified on column \"%s\" is supported only on columns of data types Oracle NUMBER, INTEGER, and FLOAT."
*Document: YES
*Cause:    An attempt is being made to specify a reservable column with a data type
           other than Oracle NUMBER, INTEGER, or FLOAT.
*Action:   Use Oracle NUMBER, INTEGER, or FLOAT data types for reservable
           columns.
Elapsed: 00:00:00.019

This feature looks promising. However, it is a very narrow feature of limited usability right now. I am sure that Oracle will build on it in the next release.

Posted in Uncategorized | Leave a comment

SQLcl named connections

A few years ago, Oracle created SQLcl, a Java-based client that is, for the most part, compatible with the venerable sqlplus and is an extension of SQL*Developer. However, the tool has several additional capabilities not present in sqlplus. One of them is the ability to import connections from SQL*Developer.

The connections can be imported into SQLcl. SQL*Developer holds the connection descriptions in a file called connections.json:

find .sqldeveloper -name connections.json
.sqldeveloper/system20.2.0.175.1842/o.jdeveloper.db.connection/connections.json
.sqldeveloper/system20.4.0.379.2205/o.jdeveloper.db.connection/connections.json
.sqldeveloper/system20.4.1.407.0006/o.jdeveloper.db.connection/connections.json
.sqldeveloper/system21.2.0.187.1842/o.jdeveloper.db.connection/connections.json
.sqldeveloper/system21.2.1.204.1703/o.jdeveloper.db.connection/connections.json
.sqldeveloper/system21.4.2.018.1706/o.jdeveloper.db.connection/connections.json
.sqldeveloper/system21.4.3.063.0100/o.jdeveloper.db.connection/connections.json
.sqldeveloper/system22.2.0.173.2018/o.jdeveloper.db.connection/connections.json
.sqldeveloper/system22.2.1.234.1810/o.jdeveloper.db.connection/connections.json
.sqldeveloper/system23.1.0.097.1607/o.jdeveloper.db.connection/connections.json
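If you script the import, you probably want the connections.json from the newest system directory. A small sketch, assuming only the directory layout shown above (latest_connections_json is a hypothetical helper, not part of SQLcl):

```python
import os
import re

def latest_connections_json(root):
    """Return the connections.json under the highest-versioned
    system<version> directory in a SQL Developer settings root,
    e.g. .sqldeveloper/system23.1.0.097.1607/o.jdeveloper.db.connection/.
    Returns None if no such file exists."""
    best, best_key = None, ()
    for d in os.listdir(root):
        m = re.match(r"system([\d.]+)$", d)
        if not m:
            continue
        # compare versions numerically, component by component
        key = tuple(int(p) for p in m.group(1).split(".") if p)
        path = os.path.join(root, d, "o.jdeveloper.db.connection",
                            "connections.json")
        if os.path.exists(path) and key > best_key:
            best, best_key = path, key
    return best
```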

This file can be imported into SQLcl:

mgogala@umajor:~$ sql /nolog

SQLcl: Release 23.3 Production on Sat Nov 25 18:12:23 2023

Copyright (c) 1982, 2023, Oracle. All rights reserved.
SQL> connmgr import .sqldeveloper/system23.1.0.097.1607/o.jdeveloper.db.connection/connections.json -d REPLACE
Importing connection system-ora19c: Success
Importing connection system-23c: Success
Importing connection system-l23c: Success
Importing connection scott-l21c: Success
4 connection(s) processed

The "-d REPLACE" option was necessary because I had imported the connections before and needed to overwrite them. Unfortunately, the connmgr utility does not have the ability to delete stored connections. Here is the list of the available commands:

More help topics:
  CONNMGR EXAMPLES
  CONNMGR IMPORT
  CONNMGR LIST
  CONNMGR SHOW
  CONNMGR TEST
  CONNMGR CLONE

Unfortunately, a "RESET" command that would wipe out all the named connections, or a "DELETE" command that would remove one particular named connection, does not exist yet. SQL*Developer and SQLcl have an awesome product manager named Jeff Smith. His blog is here: http://www.thatjeffsmith.com

I wouldn’t put the implementation of the RESET and DELETE commands past Jeff’s team. Now, back to the tool. It is possible to list the named connections and examine them:

SQL> connmgr list
scott-l21c
system-23c
system-l23c
system-ora19c
SQL> 
SQL> connmgr show scott-l21c
Name: scott-l21c
Connect String: localhost:1521/orclpdb1
User: scott
Password: not saved

Connecting to the named connection is easy:

SQL> connect -n scott-l21c
Password? (**?) *****
Connected.
SQL> show user
USER is "SCOTT"
SQL> 

The "-n" argument is necessary: it tells SQLcl that what follows is a named connection. Another thing missing is the ability to run tnsping against a named connection. That still doesn't work, but it will probably be added sooner rather than later. SQLcl has several advantages over sqlplus:

  • Formatting capabilities not present in sqlplus.
  • A REPEAT command, which makes it possible to monitor various things.
  • The ability to navigate the SQL command history using the arrow keys.
  • Connecting to named connections.

There are other commands not supported by sqlplus, like orapki and tnsping. After a very short period of existence, SQLcl has become a serious competitor to sqlplus. I wouldn't be overly surprised if one day SQLcl replaced sqlplus altogether. SQLcl is a Java program, and the latest versions require Java 11 or newer.


Cancel SQL

Today I saw an article on LinkedIn by Chris Saxon of Oracle fame. Chris reminded everybody that it is possible to cancel a running SQL statement without actually killing the session. The command is rather well known and documented:

ALTER SYSTEM CANCEL SQL 'sid, serial#'

The format is almost the same as for the ALTER SYSTEM KILL SESSION command. However, relatively few people know that canceling SQL has been possible since Oracle 9i: you could cancel the running SQL by delivering the SIGURG signal to the process executing it. Here is how it works:

Basically, I ran a very expensive SQL statement and located the server process executing it; it was process 673. I then sent the SIGURG signal to that process. The same effect could have been achieved by sending signal 23, like this: kill -23 673. However, I wanted to stress the name of the signal. The running statement was canceled, while the session itself survived.
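The signal side of this can be demonstrated in any language. Here is a minimal Python sketch that installs a SIGURG handler and then signals itself, just to show that SIGURG is an ordinary, catchable signal (which is how a process such as an Oracle server process can react to it):

```python
import os
import signal

# SIGURG is ignored by default, so a process must install a handler to
# see it. On Linux, SIGURG is signal 23, hence "kill -23 <pid>".
caught = []
signal.signal(signal.SIGURG, lambda signum, frame: caught.append(signum))

os.kill(os.getpid(), signal.SIGURG)     # same idea as: kill -URG <pid>
print(signal.SIGURG in caught)          # -> True
```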

The same result could have been achieved by executing the command Chris was talking about:

SQL> select sid,serial# from v$session where username='SCOTT';

       SID    SERIAL#
---------- ----------
       861	64810

SQL> alter system cancel sql '861,64810';

System altered.

The ALTER SYSTEM CANCEL SQL statement is preferable to sending signals because it can be executed from a client, while a signal can only be sent if you are logged into the database server as the oracle user.

My point is that this capability is not exactly new. However, it is now much more polished and much more usable. My assumption is that the mechanism is the same. Instead of the direct command given from an interactive shell, the signal is generated by the PMON and delivered to the executing process. Since I don’t have access to the source code, I can only make assumptions. Somebody working for Oracle would have to verify my assumption.


Result cache on the new Oracle versions (21c and 23c)

The Oracle result cache has been a standard feature ever since Oracle 11g. Oracle versions between 11g and 19c have two major parameters regulating the result cache:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.20.0.0.0

SQL> show parameter result_cache
NAME                           TYPE        VALUE  
------------------------------ ----------- ------ 
client_result_cache_lag        big integer 3000   
client_result_cache_size       big integer 0      
result_cache_max_result        integer     5      
result_cache_max_size          big integer 24256K 
result_cache_mode              string      MANUAL 

The main parameters are result_cache_max_size and result_cache_max_result, defining the size of the result cache in the SGA and the maximum percentage of the cache that a single result can take. There is no backing of the cache to disk. That is the crucial innovation of Oracle 21c. Here are the parameters:

Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.11.0.0.0

SQL> show parameter result_cache
NAME                             TYPE        VALUE  
-------------------------------- ----------- ------ 
client_result_cache_lag          big integer 3000   
client_result_cache_size         big integer 0      
result_cache_execution_threshold integer     2      
result_cache_max_result          integer     5      
result_cache_max_size            big integer 22M    
result_cache_max_temp_result     integer     5      
result_cache_max_temp_size       big integer 220M   
result_cache_mode                string      MANUAL 
result_cache_remote_expiration   integer     0   

So, we can see that there are some new parameters here:

result_cache_max_temp_result     integer     5      
result_cache_max_temp_size       big integer 220M   

Those two parameters are documented:

RESULT_CACHE_MAX_TEMP_SIZE

RESULT_CACHE_MAX_TEMP_SIZE specifies the maximum amount of temporary tablespace (in bytes) that can be consumed by the result cache.

Property              Description
--------------------  ----------------------------------------------------
Parameter type        Big integer
Syntax                RESULT_CACHE_MAX_TEMP_SIZE = integer [K | M | G]
Default value         RESULT_CACHE_MAX_SIZE * 10
Modifiable            ALTER SYSTEM
Modifiable in a PDB   Yes
Range of values       0 to operating system-dependent
Basic                 No
Oracle RAC            You must either set this parameter to 0 on all
                      instances to disable the result cache, or use a
                      nonzero value on all instances. Disabling the result
                      cache on some instances may lead to incorrect results.

It appears that Oracle 21c and above use TEMP tablespace to back up the result cache. Result cache is managed by the RCBG process:

SQL> select name,description 
  2  from v$bgprocess
  3* where lower(description) like '%result%';

NAME    DESCRIPTION                 
_______ ___________________________ 
RCBG    Result Cache: Background    

That means that there is something, most probably the RCBG process, that backs up the result cache to the TEMP tablespace. I was unable to find out under what conditions the result cache is saved to the TEMP tablespace and how it is retrieved. Without that information, the new feature is dangerous to use because it can slow down application execution: reading from RAM and reading from the TEMP tablespace are not the same. I'll wait until Oracle documents the new feature properly. If they don't, that can only mean that the feature is not intended for general use.
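As a rough mental model, a result cache with temporary-storage backing could work like the toy class below. This is pure speculation expressed as code, not Oracle's algorithm: an in-memory map with a size cap that spills the least-recently-used results to temporary files:

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class SpillingResultCache:
    """Toy result cache: keeps at most max_mem entries in RAM and spills
    the least-recently-used ones to a temp directory (a stand-in for the
    TEMP tablespace). Purely illustrative."""

    def __init__(self, max_mem):
        self.max_mem = max_mem
        self.mem = OrderedDict()            # key -> result, in LRU order
        self.tempdir = tempfile.mkdtemp()

    def put(self, key, result):
        self.mem[key] = result
        self.mem.move_to_end(key)
        while len(self.mem) > self.max_mem:
            old_key, old_val = self.mem.popitem(last=False)   # evict LRU
            with open(os.path.join(self.tempdir, str(old_key)), "wb") as f:
                pickle.dump(old_val, f)     # spill to "temp"

    def get(self, key):
        if key in self.mem:
            self.mem.move_to_end(key)       # fast path: RAM hit
            return self.mem[key]
        spilled = os.path.join(self.tempdir, str(key))
        if os.path.exists(spilled):         # slow path: read back from disk
            with open(spilled, "rb") as f:
                return pickle.load(f)
        return None
```

The point of the sketch is exactly the caveat above: a get() that has to go to the spill area is a disk read, not a RAM read.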


Podman issue on kernel 6.3.5

I am running Fedora 38 XFCE on my desktop. I use containers for all of my databases because containers are much smaller than virtual machines. So, I have the following configuration:

CONTAINER ID   IMAGE                 COMMAND                  CREATED       STATUS                     PORTS     NAMES
fcab2c405a5a   localhost/ora19:19    "/usr/local/bin/runD…"   5 hours ago   Exited (137) 4 hours ago             ora19-19
02b9f5d623e3   localhost/ora23c:2    "/usr/local/bin/runD…"   6 hours ago   Exited (137) 5 hours ago             ora23c
2aa392d8da93   localhost/pgsql15:3   "postgres"               7 hours ago   Exited (0) 7 hours ago               pgsql15-3
[mgogala@umajor ~]$ 

The names and versions are self-explanatory. After upgrading my kernel to 6.3.5, I couldn't start any container: Podman complained about insufficient privileges and the inability to raise the limit for files and processes.

I couldn't fix that with Podman, no matter what I did. I installed docker-ce, rebuilt all of my containers, and now Docker starts them without issue. I still have a problem with the Docker daemon running as root, which makes the system significantly less secure.


Oracle 23c – SQL_ID tracing.

In Oracle 12.2, Oracle added the ability to run SQL trace for a specific SQL_ID. So, let's get a SQL_ID for tracing:

SQL> select sql_id,sql_text from v$sqlstats where lower(sql_text) like 'select%from emp%';

SQL_ID           SQL_TEXT                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   
________________ __________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________ 
0a56fma6rh6c8    SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel  */ NVL(SUM(C1),0) FROM (SELECT /*+ qb_name("innerQuery")  */ 1 AS C1 FROM (SELECT "X$KGLCURSOR_CHILD"."KGLOBT03" "SQL_ID","X$KGLCURSOR_CHILD"."KGLNAOBJ" "SQL_TEXT" FROM "SYS"."X$KGLCURSOR_CHILD" "X$KGLCURSOR_CHILD" WHERE ("X$KGLCURSOR_CHILD"."CON_ID"=0 OR "X$KGLCURSOR_CHILD"."CON_ID"=3) AND LOWER("X$KGLCURSOR_CHILD"."KGLNAOBJ") LIKE 'select%from emp%' AND "X$KGLCURSOR_CHILD"."INST_ID"=USERENV('INSTANCE')) "VW_QBEST_Q") innerQuery    
g5hn8a7munp3u    select sql_id,sql_text from v$sql where lower(sql_text) like 'select%from emp%'                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
08xtxky9ths2s    SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel  */ NVL(SUM(C1),0) FROM (SELECT /*+ qb_name("innerQuery")  */ 1 AS C1 FROM (SELECT "X$KKSSQLSTAT"."SQL_ID" "SQL_ID","X$KKSSQLSTAT"."SQL_TEXT" "SQL_TEXT" FROM "SYS"."X$KKSSQLSTAT" "X$KKSSQLSTAT" WHERE LOWER("X$KKSSQLSTAT"."SQL_TEXT") LIKE 'select%from emp%' AND ("X$KKSSQLSTAT"."CON_ID"=0 OR "X$KKSSQLSTAT"."CON_ID"=3) AND "X$KKSSQLSTAT"."INST_ID"=USERENV('INSTANCE')) "VW_QBEST_Q") innerQuery                                              
b1bsmpmm2stn0    SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel  */ NVL(SUM(C1),0) FROM (SELECT /*+ qb_name("innerQuery")  */ 1 AS C1 FROM "SYS"."X$KKSSQLSTAT" "X$KKSSQLSTAT" WHERE (LOWER("X$KKSSQLSTAT"."SQL_TEXT") LIKE 'select%from emp%') AND ("X$KKSSQLSTAT"."CON_ID"=0 OR "X$KKSSQLSTAT"."CON_ID"=3)) innerQuery                                                                                                                                                                                              
484mc6v3sxqrj    SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel  */ NVL(SUM(C1),0) FROM (SELECT /*+ qb_name("innerQuery")  */ 1 AS C1 FROM "SYS"."X$KGLCURSOR_CHILD" "X$KGLCURSOR_CHILD" WHERE (LOWER("X$KGLCURSOR_CHILD"."KGLNAOBJ") LIKE 'select%from emp%') AND ("X$KGLCURSOR_CHILD"."CON_ID"=0 OR "X$KGLCURSOR_CHILD"."CON_ID"=3)) innerQuery                                                                                                                                                                     
a2dk8bdn0ujx7    select * from emp                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          
43vmxqgq37j7m    select sql_id,sql_text from v$sqlstats where lower(sql_text) like 'select%from emp%'                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       

7 rows selected. 

OK, we now have SQL_ID='a2dk8bdn0ujx7' for the SQL statement "select * from emp". The "old" mechanism, if a 12.2 invention can be called that, looks like this:

SQL> alter system set events='SQL_TRACE[SQL:a2dk8bdn0ujx7] wait=true';

System SET altered.

The trace file can be conveniently located from V$DIAG_INFO:

SQL> select value from v$diag_info where name='Default Trace File';

VALUE                                                      
__________________________________________________________ 
/opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_638.trc    

The file looks just like we would expect:

    Trace file /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_638.trc
      Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
      Version 23.2.0.0.0
      Build label:    RDBMS_23.2.0.0.0_LINUX.X64_230325
      ORACLE_HOME:    /opt/oracle/product/23c/dbhomeFree
      System name:      Linux
      Node name:        umajor
      Release:  6.3.4-201.fc38.x86_64
      Version:  #1 SMP PREEMPT_DYNAMIC Sat May 27 15:08:36 UTC 2023
      Machine:  x86_64
      CLID:     P
      Instance name: FREE
      Redo thread mounted by this instance: 1
      Oracle process number: 105
      Unix process pid: 638, NID: 4026533011, image: oracle@umajor


      *** 2023-06-02T16:14:52.580358+00:00 (FREEPDB1(3))
      *** SESSION ID:(1034.30759) 2023-06-02T16:14:52.580395+00:00
      *** CLIENT ID:() 2023-06-02T16:14:52.580415+00:00
      *** SERVICE NAME:(freepdb1) 2023-06-02T16:14:52.580431+00:00
     *** MODULE NAME:(SQLcl) 2023-06-02T16:14:52.580446+00:00
      *** ACTION NAME:() 2023-06-02T16:14:52.580462+00:00
      *** CLIENT DRIVER:(jdbcthin : 21.10.0.0.0) 2023-06-02T16:14:52.580475+00:00
      *** CONTAINER ID:(3) 2023-06-02T16:14:52.580491+00:00
      *** CLIENT IP:(127.0.0.1) 2023-06-02T16:14:52.580506+00:00

      =====================
      PARSING IN CURSOR #140695102917112 len=678 dep=1 uid=0 oct=3 lid=0 tim=11592632601 hv=3152351506 ad='6bd42430' sqlid='ga4mbqqxya48k'
      select t.ts#,t.file#,t.block#,nvl(t.bobj#,0),nvl(t.tab#,0),t.intcols,nvl(t.clucols,0),t.flags,t.pctfree$,t.pctused$,t.initrans,t.maxtrans,t.rowc
      nt,t.blkcnt,t.empcnt,t.avgspc,t.chncnt,t.avgrln,t.analyzetime,t.samplesize,t.cols,t.property,nvl(t.degree,1),nvl(t.instances,1),t.avgspc_flb,t.f
      lbcnt,t.kernelcols,nvl(t.trigflag, 0),nvl(t.spare1,0),nvl(t.spare2,0),t.spare4,t.spare6,nvl(t.spare7,0),ts.cachedblk,ts.cachehit,ts.logicalread,
      ts.im_imcu_count,ts.im_block_count,ts.im_sys_incarnation,ts.im_stat_update_time,ts.scanrate,nvl(t.acdrflags, 0),nvl(t.acdrtsobj#, 0),t.acdrdefau
      lttime,nvl(t.acdrrowtsintcol#, 0) from tab$ t, tab_stats$ ts where t.obj#= :1 and t.obj# = ts.obj# (+)
      END OF STMT
      EXEC #140695102917112:c=44,e=44,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2035254952,tim=11592632595
......

That is all very well known and nothing to blog about. However, Oracle 23c has a new package called DBMS_USERDIAG. The package looks very practical and promising:

SQL> desc dbms_userdiag
PROCEDURE CHECK_SQL_TRACE_EVENT
Argument Name    Type              In/Out    Default?    
________________ _________________ _________ ___________ 
LEVEL            BINARY_INTEGER    OUT                   
SQL_ID           VARCHAR2          IN        DEFAULT     
SYS              BINARY_INTEGER    IN        DEFAULT     

PROCEDURE ENABLE_SQL_TRACE_EVENT
Argument Name    Type              In/Out    Default?    
________________ _________________ _________ ___________ 
LEVEL            BINARY_INTEGER    IN        DEFAULT     
SID              BINARY_INTEGER    IN        DEFAULT     
SER              BINARY_INTEGER    IN        DEFAULT     
BINDS            BINARY_INTEGER    IN        DEFAULT     
WAITS            BINARY_INTEGER    IN        DEFAULT     
PLAN_STAT        VARCHAR2          IN        DEFAULT     
SQL_ID           VARCHAR2          IN        DEFAULT     
DISABLE          BINARY_INTEGER    IN        DEFAULT     
SYS              BINARY_INTEGER    IN        DEFAULT     

FUNCTION GET_CALL_ERROR_MSG RETURNS VARCHAR2

FUNCTION GET_CALL_STATUS RETURNS NUMBER

PROCEDURE SET_EXCEPTION_MODE
Argument Name    Type       In/Out    Default?    
________________ __________ _________ ___________ 
EXC_MODE         BOOLEAN    IN        DEFAULT     

PROCEDURE SET_TRACEFILE_IDENTIFIER
Argument Name     Type        In/Out    Default?    
_________________ ___________ _________ ___________ 
TRC_IDENTIFIER    VARCHAR2    IN                    

PROCEDURE TRACE
Argument Name    Type              In/Out    Default?    
________________ _________________ _________ ___________ 
MESSAGE          VARCHAR2          IN                    
ALERT            BINARY_INTEGER    IN        DEFAULT  

So, let’s turn off tracing for the given statement id:

SQL> alter system set events='SQL_TRACE[SQL:a2dk8bdn0ujx7] level=0';

System SET altered.

Now, let’s try with the DBMS_USERDIAG package:

SQL> exec DBMS_USERDIAG.ENABLE_SQL_TRACE_EVENT(waits=>1,sql_id=>'a2dk8bdn0ujx7');


PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03

Now, we need to re-execute the statement and check the trace file:

SQL> select * from emp;

   EMPNO ENAME     JOB              MGR HIREDATE        SAL    COMM    DEPTNO 
________ _________ ____________ _______ ___________ _______ _______ _________ 
    7369 SMITH     CLERK           7902 19801217        800                20 
    7499 ALLEN     SALESMAN        7698 19810220       1600     300        30 
    7521 WARD      SALESMAN        7698 19810222       1250     500        30 
    7566 JONES     MANAGER         7839 19810402       2975                20 
    7654 MARTIN    SALESMAN        7698 19810928       1250    1400        30 
    7698 BLAKE     MANAGER         7839 19810501       2850                30 
    7782 CLARK     MANAGER         7839 19810609       2450                10 
    7788 SCOTT     ANALYST         7566 19870419       3000                20 
    7839 KING      PRESIDENT            19811117       5000                10 
    7844 TURNER    SALESMAN        7698 19810908       1500       0        30 
    7876 ADAMS     CLERK           7788 19870523       1100                20 
    7900 JAMES     CLERK           7698 19811203        950                30 
    7902 FORD      ANALYST         7566 19811203       3000                20 
    7934 MILLER    CLERK           7782 19820123       1300                10 

14 rows selected. 

Elapsed: 00:00:00.012
SQL> select value from v$diag_info where name='Default Trace File';

VALUE                                                      
__________________________________________________________ 
/opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_829.trc    

Elapsed: 00:00:00.005

Unfortunately, when checking with ADRCI, there is nothing there:

adrci> show trace /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_829.trc
DIA-48908: No trace files are found

It looks like the first bug in Oracle 23c. Of course, Oracle 23c FREE, which I am using here, is just an XE version, and an early one at that. Now for the philosophical part of tracing. Tracing is usually done on the entire session, to see where the time is spent and to locate the SQL that may need to be improved. Why would anyone want to trace a single statement? The reason is very simple: sometimes the problem with a SQL statement is not in the plan but in the extenuating circumstances, like global cache events in a RAC environment or waiting for a lock in a 2PC environment.

Tracing a particular SQL_ID is done very rarely, but when it does need to be done, it is an extremely useful tool.


Oracle 23c

Oracle has recently made Oracle 23c available for download. The software is characterized as a "free edition for developers", which is what used to be "Oracle XE":

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

Last Successful login time: Sat Apr 15 2023 17:03:04 -04:00

Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0

SQL> alter database datafile 15 resize 16G;
alter database datafile 15 resize 16G
*
ERROR at line 1:
ORA-12954: The request exceeds the maximum allowed database size of 12 GB.

So far, the changes from Oracle 19c aren’t too radical but I still have to learn it.

UPDATE on 04/27/2023:

Since this is the XE version of Oracle, its SGA cannot exceed 1.5 GB. I tested several values, starting with 4 GB, and nothing worked until the magic number of 1536 MB. This is really meant for laptop-sized systems and for giving presentations.

Posted in Uncategorized | Leave a comment

Fast Ingest in Oracle 19c

Oracle 19c (19.16, to be precise) has introduced a new feature to speed up inserts. The feature is called “fast ingest”: rows are inserted into memory, into a special pool sized by the new memoptimize_pool_size instance parameter, which can only be modified at the CDB$ROOT level, not at the PDB level.

SQL> show parameter memopti
NAME                  TYPE        VALUE 
--------------------- ----------- ----- 
memoptimize_pool_size big integer 1G  

Inserts are flagged as “fast ingest” by using the “memoptimize_write” hint. Such inserts are directed into the memory pool defined by the parameter above and then written to disk by a background process. The new feature is described here:

https://blogs.oracle.com/in-memory/post/fast-ingest-updates
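Conceptually, fast ingest is a write-behind pattern: rows accumulate in a memory pool and are drained to disk later, decoupling the client from disk I/O. The toy sketch below illustrates that general pattern only; it is not Oracle's implementation, and all names in it are made up.

```python
# Toy write-behind buffer: rows accumulate in memory and are drained
# to the backing store in batches. Conceptual sketch only.

class WriteBehindBuffer:
    def __init__(self, flush_threshold=1024):
        self.flush_threshold = flush_threshold
        self.pending = []        # rows waiting in the memory pool
        self.persisted = []      # stand-in for rows written to disk

    def insert(self, row):
        """Fast path: append to memory only."""
        self.pending.append(row)
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        """Drain the pool to the backing store."""
        self.persisted.extend(self.pending)
        self.pending.clear()

buf = WriteBehindBuffer(flush_threshold=4)
for i in range(10):
    buf.insert((i, f"row-{i}"))
buf.flush()                      # final drain, akin to forcing a write_end
print(len(buf.persisted))        # -> 10
```

The catch, as the tests below will show, is that rows living only in such a buffer do not participate in ordinary transaction semantics.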

The purpose of this post was to determine how practical this new feature is. For that, I tested by inserting 5M rows into a test table. The table that I tested on looks like this:

SQL> desc test_tab
   Name       Null?            Type 
_______ ___________ _______________ 
COL1    NOT NULL    NUMBER(38)      
COL2                VARCHAR2(20)  

Memory optimization hasn’t been turned on yet. Let’s insert 5M rows into the table, using the classic insert. Here is a short PL/SQL snippet I wrote to do that:

set serveroutput on
set timing on
connect scott/tiger@test1
truncate table test_tab reuse storage;
declare
ROWS constant integer := 5000000;
BATCH constant integer := 1024;
ind integer;

begin
for ind in 1..ROWS loop
insert into test_tab values(ind,dbms_random.string('x',20));
if (mod(ind,BATCH) = 0) then
    commit;
end if;
end loop;
commit;
dbms_output.put_line(ROWS||' rows inserted.');
end;
/
exit;

This is a brief FOR loop executing a very simple insert. The script runs for 4 minutes 41.9 seconds:

bash-4.2$ sqlplus /nolog @populate_test_tab_2.sql

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 23 20:04:19 2023
Version 19.18.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.
Connected.
Table truncated.
Elapsed: 00:00:01.32
5000000 rows inserted.
PL/SQL procedure successfully completed.
Elapsed: 00:04:41.90
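The same row-at-a-time pattern with a commit every BATCH rows can be sketched database-agnostically with Python's stdlib sqlite3. The table and the (smaller) row count here are illustrative only:

```python
import sqlite3, random, string

# Database-agnostic sketch: single-row inserts with a commit every
# BATCH rows, mirroring the PL/SQL loop above. Sizes are illustrative.
ROWS, BATCH = 10_000, 1024

conn = sqlite3.connect(":memory:")
conn.execute("create table test_tab (col1 integer primary key, col2 text)")

for ind in range(1, ROWS + 1):
    # analogue of dbms_random.string('x', 20): 20 random alphanumerics
    col2 = "".join(random.choices(string.ascii_uppercase + string.digits, k=20))
    conn.execute("insert into test_tab values (?, ?)", (ind, col2))
    if ind % BATCH == 0:
        conn.commit()            # periodic commit, as in the PL/SQL loop
conn.commit()                    # commit the final partial batch

count = conn.execute("select count(*) from test_tab").fetchone()[0]
print(count)                     # -> 10000
```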

Before we start doing MEMOPTIMIZE FOR WRITE, let’s test with a slightly more complex script, using PL/SQL bulk processing (FORALL):

set serveroutput on
set timing on
connect scott/tiger@test1
truncate table test_tab reuse storage;
declare
ROWS constant integer := 5000000;
BATCH constant integer := 1024;
type data_tab is table of test_tab%rowtype index by binary_integer;
test_arr data_tab:=data_tab();
test_row test_tab%rowtype;
ind binary_integer :=0;
t_ind binary_integer :=0;

procedure clean_arr(t in out nocopy data_tab) is
begin
   for i in t.FIRST..t.COUNT loop
       if (t(i).col1 is NULL) then
           t.delete(i);
       end if;
   end loop;
end;
begin
for ind in 0..ROWS+1 loop
test_row.col1:=ind;
test_row.col2:=dbms_random.string('x',20);
t_ind:=mod(ind,BATCH);
test_arr(t_ind+1):=test_row;
if (t_ind=0 and ind>0) then
   forall i in test_arr.FIRST..test_arr.COUNT
   insert into test_tab values test_arr(i);
   commit;
   test_arr.delete;
end if;
end loop;
clean_arr(test_arr);
if (test_arr.COUNT > 0) then
   forall i in test_arr.FIRST..test_arr.COUNT
   insert into test_tab values test_arr(i);
   commit;
end if;
dbms_output.put_line(ROWS||' rows inserted.');
end;
/

With bulk processing, the time needed to insert 5M rows is noticeably shorter:

bash-4.2$ sqlplus /nolog @populate_test_tab.sql

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 23 20:28:35 2023
Version 19.18.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

Connected.

Table truncated.

Elapsed: 00:00:01.35
5000000 rows inserted.

PL/SQL procedure successfully completed.

Elapsed: 00:01:22.35
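FORALL wins because it sends a whole batch of rows in a single execution instead of one call per row. The closest stdlib analogue is cursor.executemany; a minimal sketch (sizes illustrative, not a benchmark):

```python
import sqlite3, random, string

# Batched inserts: accumulate rows, then one executemany call per batch,
# mirroring the FORALL pattern above. Sizes are illustrative.
ROWS, BATCH = 10_000, 1024

def random_row(ind):
    return (ind, "".join(random.choices(string.ascii_uppercase, k=20)))

conn = sqlite3.connect(":memory:")
conn.execute("create table test_tab (col1 integer primary key, col2 text)")

batch = []
for ind in range(1, ROWS + 1):
    batch.append(random_row(ind))
    if len(batch) == BATCH:
        conn.executemany("insert into test_tab values (?, ?)", batch)
        conn.commit()
        batch.clear()
if batch:                        # flush the final partial batch
    conn.executemany("insert into test_tab values (?, ?)", batch)
    conn.commit()

print(conn.execute("select count(*) from test_tab").fetchone()[0])  # -> 10000
```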

Classic PL/SQL bulk processing brings the time down to 1 minute 22.35 seconds. OK, let’s now do the new stuff and see how much performance it buys us. First, let’s do the preliminaries:

USER SYSTEM:
SQL> alter system set memoptimize_pool_size=1G scope=spfile;
System altered.
(Restart needed)
USER SCOTT:
SQL> alter table test_tab memoptimize for write;
Table TEST_TAB altered.
Elapsed: 00:00:00.024

Now that the preparatory work is done, let’s write a new copy of the load script:

set serveroutput on
set timing on
connect scott/tiger@test1
truncate table test_tab reuse storage;
declare
ROWS constant integer := 5000000;
BATCH constant integer := 1024;
ind integer;

begin
for ind in 1..ROWS loop
insert /*+ memoptimize_write */ into test_tab values(ind,dbms_random.string('x',20));
if (mod(ind,BATCH) = 0) then
    dbms_memoptimize.write_end;
end if;
end loop;
dbms_memoptimize.write_end;
commit;
dbms_output.put_line(ROWS||' rows inserted.');
end;
/
exit;

The “memoptimize_write” hint and the dbms_memoptimize.write_end calls are specific to this new feature. Here is the execution:

-bash-4.2$ sqlplus /nolog @populate_test_tab_3.sql

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 23 20:55:22 2023
Version 19.18.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.
Connected.
Table truncated.
Elapsed: 00:00:01.42
5000000 rows inserted.
PL/SQL procedure successfully completed.
Elapsed: 00:03:28.27
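Converting the three elapsed times into throughput makes the comparison concrete; a quick back-of-the-envelope check:

```python
# Rows per second for the three 5M-row runs above.
ROWS = 5_000_000
runs = {
    "row-by-row insert":  4 * 60 + 41.90,   # 00:04:41.90
    "FORALL bulk insert": 1 * 60 + 22.35,   # 00:01:22.35
    "fast ingest (hint)": 3 * 60 + 28.27,   # 00:03:28.27
}
for name, secs in runs.items():
    print(f"{name}: {ROWS / secs:,.0f} rows/s")
```

This works out to roughly 18K rows/s for the naive loop, 61K rows/s for FORALL, and 24K rows/s for fast ingest.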

The rows were inserted in 3 minutes 28.27 seconds. That is faster than the simple insert but significantly slower than bulk processing using FORALL. So, the lesson here is that it pays to invest some time into rewriting the load script rather than reaching for quick and easy solutions. Also, MEMOPTIMIZE FOR WRITE can be problematic with COMMIT or ROLLBACK:

SQL> truncate table test_tab;
Table TEST_TAB truncated.
Elapsed: 00:00:00.083
SQL> insert into test_tab values(1,'Mladen');
1 row inserted.
Elapsed: 00:00:00.005
SQL> insert /*+ memoptimize_write */ into test_tab values(2,'Mladen');
1 row inserted.
Elapsed: 00:00:00.003
SQL> rollback;

Rollback complete.
Elapsed: 00:00:00.006
SQL> select * from test_tab;
   COL1      COL2 
_______ _________ 
      2 Mladen    
Elapsed: 00:00:00.014
SQL> 

The row inserted using memory optimization was not rolled back. The table was truncated and two rows were inserted, one with the “memoptimize_write” hint and one without, all within a single transaction. However, when the ROLLBACK command was issued, only the ordinary INSERT was rolled back; the “fast ingest” insert was not. I haven’t tested the feature on Oracle 21c, for two reasons:

  • My employer doesn’t have Oracle 21c. The highest version in the data center is 19c.
  • This post would be far too long with the stuff for Oracle 21c added.
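The ROLLBACK behavior above is the surprising part. For contrast, this is what ordinary ACID semantics look like: every uncommitted statement in the transaction is undone, regardless of how each row was inserted. A minimal illustration with Python's stdlib sqlite3:

```python
import sqlite3

# In a conventional transactional store, ROLLBACK undoes all
# uncommitted DML in the transaction.
conn = sqlite3.connect(":memory:")
conn.execute("create table test_tab (col1 integer, col2 text)")
conn.commit()

conn.execute("insert into test_tab values (1, 'Mladen')")
conn.execute("insert into test_tab values (2, 'Mladen')")
conn.rollback()                  # both inserts are undone

print(conn.execute("select count(*) from test_tab").fetchone()[0])  # -> 0
```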

Also, the memory allocated was significantly larger than defined by the parameter:

2023-03-23 20:55:24.357000 -04:00
Memoptimize Write allocated 2055M from large pool

My conclusion is that this feature is nice but doesn’t buy me much. The problem with ROLLBACK is rather significant. Another open question, which still needs testing, is whether inserts done this way are logged in the redo logs; I will need to create an active standby and see whether the data gets written to the standby. However, I don’t think I will use this feature before Oracle 23c.

Posted in Uncategorized | Leave a comment

January 2023 Oracle DBRU for Linux

Oracle DB and GI release upgrade patches for January 2023 have been put on hold. Here is the direct quote from My Oracle Support:

Oracle Database 19c RU, RUR, and BP Jan 2023 Known Issues

My Oracle Support Document ID: 19202301.9

Released: January 17, 2023

This document lists the known issues for Oracle Database / Grid Infrastructure / OJVM Release 19c Jan 2023. These known issues are in addition to the issues listed in the README file for each individual RU, RUR, or BP.

Oracle recommends that you subscribe to this Known Issues NOTE in order to stay informed of any emergent problems.

This document includes the following sections:

  • Section 1, “Known Issues”
  • Section 2, “Modification History”
  • Section 3, “Documentation Accessibility”

1 Known Issues

#: 1.1
Date Added: January 24, 2023
Applies to: DB & GI RU 19.18.0.0.230117 (Linux.x64 only)
Issue: After applying the 19.18 DB and GI RU on Grid and DB homes, the LMON trace file may contain repeated entries of “Submitting asynchronized dump request” every 48 days of operating system uptime. For more details see MOS Note 2923428.1.
Fix or Workaround: For the issue details and the workaround, see MOS Note 2923428.1. To prevent impact to database systems, the 19.18 DB and GI Release Update patches for Linux.x64 have been put on hold as of 6pm PST 24-Jan-2023.

Also, this patch reports an error if “startup upgrade” was issued before the database portion of the patch installation. I know that DBRU patches are not supposed to be applied in upgrade mode anyway, but I usually ran “datapatch” to install the DBRU and OJVM patches together, which means the database was in upgrade mode. This is the first time I have seen that cause an error. Anyway, the January 2023 DBRU and GIRU are no longer available for download, and Oracle is working hard on replacing them.

Posted in Uncategorized | Leave a comment

Faster – new book by Cary Millsap

A new book by Cary Millsap, named “Faster”, is about to hit the shelves. For the few who don’t know, Cary is the author of “Optimizing Oracle Performance”, the most revolutionary Oracle tuning book ever. Cary laid the foundation for the event-based tuning method, or “Method R”, as he calls it in his book. The previous method was “Method C”, based on BCHR witchcraft. Today largely forgotten, BCHR used to stand for “Buffer Cache Hit Ratio”, and the idea of BCHR-based witchcraft was that the goal of tuning was to maximize the BCHR. The whole thing went downhill when Connor McDonald, a renowned Oracle wizard and one of the people behind the “Ask Tom” web page, created a little procedure which could propel the BCHR as high as desired, with a decidedly negative impact on performance. Connor’s script took a small table and accessed it over and over again until the BCHR exceeded the desired value. The impact on performance was negative because the loop burned lots of CPU without contributing anything useful to the user.

Cary, on the other hand, had a different idea: start from the application instead of the database and figure out what the application is waiting for. Once the wait point is known, tuning basically consists of trying to either eliminate or reduce that wait. Cary also developed a Perl script which could aggregate a SQL trace file and summarize the waits per SQL statement. For those younger than 40, Perl is a scripting language, a superior alternative to Python, obviously too complex for the modern generations of programmers. The simple idea above consists of two even simpler ideas:

  • We should start from the application. The customer is not interested in the database indicators, the customer is interested in the application performance. That was a big heresy back in the day of the BCHR witchcraft.
  • Let’s see what the application is actually waiting for instead of assuming it is IO. Sometimes, it’s not I/O.
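The trace-aggregation idea itself is simple to sketch. Below is a toy Python version of what Cary's Perl profiler did, grouping WAIT lines from a SQL trace file by event name and summing the elapsed times; the trace lines here are a hand-made miniature, not real output:

```python
import re
from collections import defaultdict

# Toy aggregator for SQL trace WAIT lines: sum the 'ela=' microseconds
# per wait event name. The sample trace below is hand-made.
WAIT_RE = re.compile(r"nam='([^']+)' ela= (\d+)")

trace = """\
WAIT #1: nam='db file sequential read' ela= 1200 file#=4 block#=171
WAIT #1: nam='db file sequential read' ela= 800 file#=4 block#=172
WAIT #1: nam='log file sync' ela= 300 buffer#=100
"""

totals = defaultdict(int)        # microseconds per wait event
for line in trace.splitlines():
    m = WAIT_RE.search(line)
    if m:
        totals[m.group(1)] += int(m.group(2))

# Print the wait profile, biggest consumer first.
for event, ela in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{event}: {ela} us")
```

A real profiler also attributes the waits to individual SQL statements by tracking the cursor numbers, but the grouping-and-summing core is the same.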

Cary’s “Optimizing Oracle Performance” remains relevant to this very day because it lays out the logical foundations for tuning an application running against Oracle, despite the fact that it was written back in the days of Oracle 8i. It was the best of times; it was the worst of times. The book is philosophically very similar to the famous book by Kevin Dowd called “High Performance Computing”, in which Kevin Dowd uses a profiler to measure performance and see in which part of the program most of the time is spent.

The new book is called “Faster” and, according to Amazon, it will arrive in my mailbox on September 22nd, 2022. I cannot wait for it. I have never met Cary in person, but I have communicated with him through email. Knowing him, I think his book will be well worth the wait.

Posted in Uncategorized | Leave a comment