Converting national characters to ASCII without spaces
In 10g and 11g I use the “owa_util.get_procedure” function. I normally use this in packages as it will also return the name of an internal procedure or function as part of the package name, i.e. (package_name).(procedure name). I use this to provide a generic
EXCEPTION template for identifying where an exception occurred. It returns information about which module called this code.
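As a sketch only (the package name demo_pkg and procedure do_work are hypothetical, and a matching package specification is assumed), such a generic exception template might look like this:

```sql
CREATE OR REPLACE PACKAGE BODY demo_pkg AS

  PROCEDURE do_work IS
  BEGIN
    NULL;  -- real work goes here
  EXCEPTION
    WHEN OTHERS THEN
      -- owa_util.get_procedure returns "package_name.procedure_name"
      -- for the currently executing subprogram
      DBMS_OUTPUT.put_line('Error in ' || owa_util.get_procedure ||
                           ': ' || SQLERRM);
      RAISE;
  END do_work;

END demo_pkg;
/
```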
Another good example of using the new Oracle 10g features
A very useful new feature in Oracle 11g is the analytic Oracle SQL functions.
This code finds changes of the address post code in a journal table:
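The original listing is missing here; as a sketch, a query using the LAG analytic function against a hypothetical journal table address_jn(address_id, post_code, jn_date) could detect post code changes like this:

```sql
SELECT address_id, jn_date, prev_post_code, post_code
FROM (
  SELECT address_id,
         jn_date,
         post_code,
         -- previous post code for the same address, in journal order
         LAG(post_code) OVER (PARTITION BY address_id
                              ORDER BY jn_date) AS prev_post_code
  FROM   address_jn
)
WHERE prev_post_code IS NOT NULL
AND   prev_post_code <> post_code;
```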
I suspect you’re only reporting the last error in a stack like this:
If so, that’s because your package is stateful:
The values of the variables, constants, and cursors that a package declares (in either its specification or body) comprise its package state. If a PL/SQL package declares at least one variable, constant, or cursor, then the package is stateful; otherwise, it is stateless.
When you recompile the state is lost:
If the body of an instantiated, stateful package is recompiled (either explicitly, with the “ALTER PACKAGE Statement”, or implicitly), the next invocation of a subprogram in the package causes Oracle Database to discard the existing package state and raise the exception ORA-04068.
After PL/SQL raises the exception, a reference to the package causes Oracle Database to re-instantiate the package, which re-initializes it…
You can’t avoid this if your package has state. I think it’s fairly rare to really need a package to be stateful though, so you should revisit anything you have declared in the package, but outside a function or procedure, to see if it’s really needed at that level. Since you’re on 10g though, that includes constants, not just variables and cursors.
But the last paragraph from the quoted documentation means that the next time you reference the package in the same session, you won’t get the error and it will work as normal (until you recompile again).
As of Oracle Database 11g Release 2 (11.2.0.2), Oracle Database treats a package as stateless if its state is constant for the life of a session (or longer). This is the case for a package whose items are all compile-time constants.
A compile-time constant is a constant whose value the PL/SQL compiler can determine at compilation time. A constant whose initial value is a literal is always a compile-time constant. A constant whose initial value is not a literal, but which the optimizer reduces to a literal, is also a compile-time constant. Whether the
PL/SQL optimizer can reduce a nonliteral expression to a literal depends on optimization level. Therefore, a package that is stateless when compiled at one optimization level might be stateful when compiled at a different optimization level. For information about the optimizer, see “PL/SQL Optimizer”.
Each session that references a package item has its own instantiation of that package. If the package is stateful, the instantiation includes its state. The package state persists for the life of a session, except in these situations:
- The package is SERIALLY_REUSABLE. For details, see “SERIALLY_REUSABLE Packages”.
- The package body is recompiled. If the body of an instantiated, stateful package is recompiled (either explicitly, with the “ALTER PACKAGE Statement”, or implicitly), the next invocation of a subprogram in the package causes Oracle Database to discard the existing package state and raise the exception
ORA-04068. After PL/SQL raises the exception, a reference to the package causes Oracle Database to re-instantiate the package, which re-initializes it. Therefore, previous changes to the package state are lost. (For information about initialization, see “Package Instantiation and Initialization”.)
- Any of the session’s instantiated packages are invalidated and revalidated. All of a session’s package instantiations (including package states) can be lost if any of the session’s instantiated packages are invalidated and revalidated. For information about invalidation and revalidation of schema objects, see Oracle Database Advanced Application Developer’s Guide.
It sounds like what you want is to be able to list all packages that may potentially have state.
What you’re looking for is just packages that have any global variables or constants. For a single package, this is quite simple by inspection. To look across all packages in a schema, however, you could use PL/Scope:
First, log in as the schema owner, turn on PL/Scope in your session:
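For example:

```sql
ALTER SESSION SET plscope_settings = 'IDENTIFIERS:ALL';
```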
Then, recompile all your package bodies.
Then, run this query to find all the variables and constants declared at the package level:
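The query itself is not shown in the text; a sketch using the USER_IDENTIFIERS view (assuming PL/Scope data was gathered when the packages were recompiled) might be:

```sql
SELECT object_type, object_name, type, name
FROM   user_identifiers
WHERE  object_type IN ('PACKAGE', 'PACKAGE BODY')
AND    usage = 'DECLARATION'
AND    type IN ('VARIABLE', 'CONSTANT', 'CURSOR')
AND    usage_context_id = 1;  -- declared directly at package level
```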
I’d suggest the resulting list of packages will be your target.
If you’re on
11gR2, constants no longer cause this problem, so you’d use this query instead:
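Again the query is missing from the text; presumably it is the same query with constants excluded, e.g.:

```sql
SELECT object_type, object_name, type, name
FROM   user_identifiers
WHERE  object_type IN ('PACKAGE', 'PACKAGE BODY')
AND    usage = 'DECLARATION'
AND    type IN ('VARIABLE', 'CURSOR')  -- constants no longer cause ORA-04068
AND    usage_context_id = 1;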
Here is a common problem in PL/SQL. We will try to find the answer to one common question in database programming: does a specific record exist in another table or not? Some examples below show how to manage this problem:
In lines 11 and 14 there is a place to insert your own code. We find another solution when we define an explicit cursor:
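The cursor listing is missing here; as a sketch (other_table and the key value 42 are hypothetical), an explicit-cursor existence check could look like this:

```sql
DECLARE
  CURSOR c_exists IS
    SELECT 1 FROM other_table WHERE id = 42;  -- hypothetical table and key

  l_dummy PLS_INTEGER;
BEGIN
  OPEN c_exists;
  FETCH c_exists INTO l_dummy;

  -- %FOUND tells us whether the fetch returned a row
  IF c_exists%FOUND THEN
    DBMS_OUTPUT.put_line('Record exists');
  ELSE
    DBMS_OUTPUT.put_line('Record does not exist');
  END IF;

  CLOSE c_exists;
END;
/
```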
Though there are two other sections in this chapter that cover REF cursors more fully (“Static REF Cursors” and “Dynamic REF Cursors”), a brief overview regarding these types of cursors warrants mention in this section due to their structure: they must be explicitly opened, fetched from, and closed. If you are returning a result set to a client, a great way to avoid lots of repetitive code is to use a REF cursor. Essentially, the cursor you open usually depends on what input you receive from the requesting client. Listing 10-4 provides a brief example of a REF cursor scenario in PL/SQL:
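The listing itself is not reproduced here; as a sketch of the scenario described (the function name get_objects is hypothetical), a REF cursor whose query depends on client input might look like this:

```sql
CREATE OR REPLACE FUNCTION get_objects (p_owner IN VARCHAR2 DEFAULT NULL)
  RETURN SYS_REFCURSOR
IS
  l_cur SYS_REFCURSOR;
BEGIN
  -- Open a different cursor depending on the input from the client.
  IF p_owner IS NULL THEN
    OPEN l_cur FOR
      SELECT owner, object_name FROM all_objects;
  ELSE
    OPEN l_cur FOR
      SELECT owner, object_name
      FROM   all_objects
      WHERE  owner = p_owner;
  END IF;

  RETURN l_cur;  -- the client fetches from and closes the cursor
END get_objects;
/
```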
create table aTable( id number, aClobColumn clob );

insert into aTable values ( 1, 'value' );
insert into aTable values ( 1, 'values are like this' );

select * from aTable
where  dbms_lob.substr( aClobColumn, 4000, 1 ) = 'value';
select * from my_table where dbms_lob.instr(product_details,'NEW.PRODUCT_NO')>=1
Very useful website 🙂 for PL/SQL developers!
Subprogram inlining in 11g

The release of Oracle 10g brought with it the first optimising compiler for PL/SQL. As discussed in this oracle-developer.net article, Oracle added two levels of optimisation to provide some impressive performance gains without changing a line of code.

Oracle 11g has taken compiler optimisation further, in particular with the concept of subprogram inlining. With subprogram inlining, Oracle will replace a call to a subroutine (such as a function) with the subroutine code itself during compilation. One of the benefits of this is that we can continue to write well-structured, modular code without any performance penalties. For SQL-intensive PL/SQL programs, the gains from inlining might be marginal, but for procedural code, inlining might provide some greater optimisation benefits, as we will see in this article.
Source: subprogram inlining in 11g
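As a small sketch of the feature, PRAGMA INLINE requests that a specific call be inlined (the function add_one is purely illustrative):

```sql
DECLARE
  FUNCTION add_one (p IN PLS_INTEGER) RETURN PLS_INTEGER IS
  BEGIN
    RETURN p + 1;
  END add_one;

  l_total PLS_INTEGER := 0;
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    -- Request inlining of add_one in the statement that follows.
    PRAGMA INLINE (add_one, 'YES');
    l_total := add_one(l_total);
  END LOOP;

  DBMS_OUTPUT.put_line(l_total);
END;
/
```

Alternatively, compiling with PLSQL_OPTIMIZE_LEVEL = 3 lets the compiler inline subprograms automatically where it judges it worthwhile.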
declare
  k_ascii_a constant pls_integer := ascii('A');
  l_index   varchar2(1);
begin
  for i in 1 .. 10 loop -- Change 10 for whatever is your limit
    l_index := chr(k_ascii_a + i - 1);
    dbms_output.put_line('Index is ' || l_index);
  end loop;
end;
But this is not portable across character sets. I think a better solution is to declare up front your index variables like this:
declare
  k_index_char constant varchar2(26) := 'ABCDEF...Z';
  l_index      varchar2(1);
begin
  for i in 1 .. 10 loop -- Change 10 for whatever is your limit
    l_index := substr(k_index_char, i, 1);
    dbms_output.put_line('Index is ' || l_index);
  end loop;
end;
That way if you change character sets (e.g. to utf8) you are not reliant on a collating sequence that doesn’t match your char set.
The UNIQUE_SESSION_ID function returns a character string unique to the session among all sessions currently connected to the database. The return string can be up to 24 bytes in length. Multiple calls to the function from the same session will always return the same string. The program header follows:
FUNCTION DBMS_SESSION.UNIQUE_SESSION_ID RETURN VARCHAR2;
UNIQUE_SESSION_ID is functionally identical to the DBMS_PIPE.UNIQUE_SESSION_NAME function; however, their return values are not identical. Be very careful not to write code that assumes that these two functions are equivalent.
The UNIQUE_SESSION_ID function does not raise any exceptions.
To display the value of the unique id for the current session, specify the following:
DECLARE
  my_unique_id VARCHAR2(30);
BEGIN
  my_unique_id := DBMS_SESSION.UNIQUE_SESSION_ID;
  DBMS_OUTPUT.PUT_LINE('UNIQUE ID: ' || my_unique_id);
END;
An example of output from executing the preceding PL/SQL block follows:
UNIQUE ID: F000E4020000
— Source https://www.toadworld.com/platforms/oracle/w/wiki/3246.dbms-session-unique-session-id
When you have a problem retrieving information about the session in an Oracle PL/SQL database, for example for journalling purposes, look below at the SQL statements. This query returns two or more rows:
SELECT * FROM v$session s WHERE username IS NOT NULL AND s.AUDSID = SYS_CONTEXT ('userenv', 'sessionid') ORDER BY username, osuser;
You need to add two conditions to receive one row.
WHERE username IS NOT NULL AND s.AUDSID = SYS_CONTEXT ('userenv', 'sessionid') AND S.SID = SYS_CONTEXT ('userenv', 'sid')
The final query returns the unique session row:
SELECT * FROM v$session s WHERE username IS NOT NULL AND s.AUDSID = SYS_CONTEXT ('userenv', 'sessionid') AND S.SID = SYS_CONTEXT ('userenv', 'sid') ORDER BY username, osuser;
Bulk Processing with BULK COLLECT and FORALL

- BULK COLLECT: SELECT statements that retrieve multiple rows with a single fetch, improving the speed of data retrieval
- FORALL: INSERTs, UPDATEs, and DELETEs that use collections to change multiple rows of data very quickly

PL/SQL is so tightly integrated with the SQL language, you might be wondering why special features would be needed to improve the performance of SQL statements inside PL/SQL. The explanation has everything to do with how the runtime engines for both PL/SQL and SQL communicate with each other: through a context switch.
Bulk binds can improve performance when loading collections from queries. The BULK COLLECT INTO construct binds the output of the query to the collection. To test this, create the following table.
CREATE TABLE bulk_collect_test AS SELECT owner, object_name, object_id FROM all_objects;
The following code compares the time taken to populate a collection manually and using a bulk bind.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab   t_bulk_collect_test_tab := t_bulk_collect_test_tab();
  l_start NUMBER;
BEGIN
  -- Time a regular population.
  l_start := DBMS_UTILITY.get_time;

  FOR cur_rec IN (SELECT * FROM bulk_collect_test) LOOP
    l_tab.extend;
    l_tab(l_tab.last) := cur_rec;
  END LOOP;

  DBMS_OUTPUT.put_line('Regular (' || l_tab.count || ' rows): ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time bulk population.
  l_start := DBMS_UTILITY.get_time;

  SELECT *
  BULK COLLECT INTO l_tab
  FROM   bulk_collect_test;

  DBMS_OUTPUT.put_line('Bulk (' || l_tab.count || ' rows): ' ||
                       (DBMS_UTILITY.get_time - l_start));
END;
/

Regular (42578 rows): 66
Bulk (42578 rows): 4

PL/SQL procedure successfully completed.

SQL>
Remember that collections are held in memory, so doing a bulk collect from a large query could cause a considerable performance problem. In actual fact you would rarely do a straight bulk collect in this manner. Instead you would limit the rows returned using the LIMIT clause and move through the data processing smaller chunks. This gives you the benefits of bulk binds, without hogging all the server memory. The following code shows how to chunk through the data in a large table.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab t_bulk_collect_test_tab;

  CURSOR c_data IS
    SELECT * FROM bulk_collect_test;
BEGIN
  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 10000;
    EXIT WHEN l_tab.count = 0;

    -- Process contents of collection here.
    DBMS_OUTPUT.put_line(l_tab.count || ' rows');
  END LOOP;
  CLOSE c_data;
END;
/

10000 rows
10000 rows
10000 rows
10000 rows
2578 rows

PL/SQL procedure successfully completed.

SQL>
So we can see that with a LIMIT 10000 we were able to break the data into chunks of 10,000 rows, reducing the memory footprint of our application, while still taking advantage of bulk binds. The array size you pick will depend on the width of the rows you are returning and the amount of memory you are happy to use.
From Oracle 10g onward, the optimizing PL/SQL compiler converts cursor FOR LOOPs into BULK COLLECTs with an array size of 100. The following example compares the speed of a regular cursor FOR LOOP with BULK COLLECTs using varying array sizes.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab t_bulk_collect_test_tab;

  CURSOR c_data IS
    SELECT * FROM bulk_collect_test;

  l_start NUMBER;
BEGIN
  -- Time a regular cursor for loop.
  l_start := DBMS_UTILITY.get_time;

  FOR cur_rec IN (SELECT * FROM bulk_collect_test) LOOP
    NULL;
  END LOOP;

  DBMS_OUTPUT.put_line('Regular : ' || (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 10.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data BULK COLLECT INTO l_tab LIMIT 10;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 10 : ' || (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 100.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data BULK COLLECT INTO l_tab LIMIT 100;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 100: ' || (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 1000.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data BULK COLLECT INTO l_tab LIMIT 1000;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 1000: ' || (DBMS_UTILITY.get_time - l_start));
END;
/

Regular : 18
LIMIT 10 : 80
LIMIT 100: 15
LIMIT 1000: 10

PL/SQL procedure successfully completed.

SQL>
You can see from this example the performance of a regular FOR LOOP is comparable to a BULK COLLECT using an array size of 100. Does this mean you can forget about BULK COLLECT in 10g onward? In my opinion no. I think it makes sense to have control of the array size. If you have very small rows, you might want to increase the array size substantially. If you have very wide rows, 100 may be too large an array size.
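The FORALL side of bulk processing, mentioned at the start of this section, deserves a small sketch too. The example below reuses the bulk_collect_test table created earlier; the update itself is deliberately a no-op, purely to show the structure:

```sql
DECLARE
  TYPE t_id_tab IS TABLE OF bulk_collect_test.object_id%TYPE;

  l_ids t_id_tab;
BEGIN
  -- Collect a batch of keys to work on.
  SELECT object_id
  BULK COLLECT INTO l_ids
  FROM   bulk_collect_test
  WHERE  ROWNUM <= 100;

  -- One context switch sends all 100 updates to the SQL engine.
  FORALL i IN 1 .. l_ids.count
    UPDATE bulk_collect_test
    SET    object_name = object_name
    WHERE  object_id = l_ids(i);

  DBMS_OUTPUT.put_line(SQL%ROWCOUNT || ' rows updated');
END;
/
```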