Tom Kyte

These are the most recently asked questions on Ask Tom
Updated: 4 hours 31 sec ago

Regarding the Current Role of PL/SQL in Modern Technology Stacks

4 hours 31 sec ago
Dear Team, I hope this message finds you well. I have been reflecting on the current landscape of PL/SQL and its role in contemporary technology stacks, and I would greatly appreciate your insights on a few points that have been on my mind.

PL/SQL for business logic? While it is widely acknowledged that "as long as there is Oracle, there will be PL/SQL," I am eager to explore forward-looking scenarios where PL/SQL remains a prominent choice for business logic. Today, business logic seems to be implemented predominantly in modern object-oriented languages such as Java or .NET, leveraging features like streams and lambda functions. Could you provide examples or use cases where PL/SQL excels and is considered integral, especially in comparison with these object-oriented approaches?

PL/SQL for data engineering? The ETL landscape has shifted significantly towards technologies like Spark for seamless integration with data warehouses and data lakes. Given that shift, I am curious how PL/SQL continues to play a vital role in ETL processes. Are there specific use cases where PL/SQL is still the preferred choice in modern data engineering stacks?

I understand the historical significance of PL/SQL in minimizing network calls and keeping code close to the data, as highlighted in research papers advocating a thick-database approach. However, I am keen to bridge the gap between those theoretical advantages and practical implementations. Are enterprise projects aligning with this approach, or is the trend shifting towards business logic residing predominantly in Java/.NET environments?

In essence, could you kindly furnish examples and use cases illustrating where PL/SQL stands out as a core, integral component of modern data engineering or application development stacks? I am particularly interested in whether PL/SQL is now considered primarily a supplementary or exception-use language, driven by compliance requirements rather than intrinsic value in data movement scenarios. I appreciate your time and insights, and I look forward to hearing from you soon.
Categories: DBA Blogs

Query taking very long.

Tue, 2024-02-20 10:46
Hi Tom, I'm facing an issue that is somewhat strange and to which I have no clear answer. The database version I'm using is 12.1.0.1.0 on Windows 64-bit (both Standard and EE).

With 11g (I believe) Oracle started using the Unified Audit Trail. The default rules for my version (12c) insert a new record in the audit table every time a user connects to the DB. Because these databases are not mine, the size of the SYSAUX tablespace was not under surveillance and it grew considerably without anyone noticing. At the moment the tablespace and the table are around 16 GB and 14 GB respectively. We have tried to remove the information from the table, and that process is ongoing.

My question is: in the DBs where this table and tablespace have grown to such sizes, any (or many) query run against an object in that tablespace takes a huge amount of time. Of course, I know that if an object is very large it is going to take a long time to read, but here we are talking about a really large amount of time. When I execute a query against that tablespace, the disk subsystem (SSD disks) reads about 130 MB a second. Simplistically, one could say it should read the necessary 16 GB in a bit less than 200 seconds, but the system takes more than 10 minutes (I never allowed it to actually finish because these are PROD systems).

What I would like to know is whether there is anything specific about objects inside the SYSAUX tablespace (namely the unified audit trail objects and the scheduler job objects) that could explain such a delay in execution while showing such heavy disk reads. Thank you very much, hugo
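For reference, a minimal sketch of the kind of ongoing cleanup described, using the DBMS_AUDIT_MGMT purge API (the 90-day retention window is an assumption, not taken from the question):

<code>
-- Hedged sketch: purge unified audit records older than an assumed
-- 90-day window; the retention period is illustrative only.
BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    last_archive_time => SYSTIMESTAMP - INTERVAL '90' DAY);

  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    use_last_arch_timestamp => TRUE);
END;
/
</code>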
Categories: DBA Blogs

High db block gets for inserting into reference partitioned table

Thu, 2024-02-15 01:46
Hello Tom, Could you please advise why I'm seeing such a huge difference in db block gets and redo for an insert between a range and a reference partitioned table? Db block gets are about 100x higher for the reference partitioned table and the insert is 2-3 times slower.

<code>
DB01> create table t1 (id number(19) primary key, ts date)
  2  partition by range (ts) interval (numtodsinterval(1, 'DAY'))
     (partition P0001 values less than (to_date('2024-01-01','YYYY-MM-DD')));

Table created.

DB01> insert into t1 (id, ts) values (1, sysdate);

1 row created.

DB01> -- range interval
DB01> create table t2 (id number(19), t1_id number(19) not null,
  2  constraint t2_fk foreign key (t1_id) references t1 (id))
     partition by range (t1_id) interval (1) (partition values less than (1));

Table created.

DB01> set autotrace trace exp stat
DB01> insert into t2 (id, t1_id) select level, 1 from dual connect by level <= 2000000;

2000000 rows created.

Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825

------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | INSERT STATEMENT              |      |    1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL      | T2   |      |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |      |            |          |
|   3 |    FAST DUAL                  |      |    1 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(LEVEL<=2000000)

Statistics
----------------------------------------------------------
        105  recursive calls
      51252  db block gets
       7237  consistent gets
          0  physical reads
  147628492  redo size
        123  bytes sent via SQL*Net to client
        391  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
    2000000  rows processed

DB01> set autotrace off
DB01> commit;

Commit complete.

DB01> -- reference
DB01> create table t3 (id number(19), t1_id number(19) not null,
  2  constraint t3_fk foreign key (t1_id) references t1 (id))
     partition by reference (t3_fk);

Table created.

DB01> set autotrace trace exp stat
DB01> insert into t3 (id, t1_id) select level, 1 from dual connect by level <= 2000000;

2000000 rows created.

Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825

------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows ...
</code>
Categories: DBA Blogs

SQL loader not loading all the needed rows due to new line character and enclosement character

Mon, 2024-02-12 18:06
I have a problem with how SQL*Loader handles the end of a column value. I was hoping to manage CR LF, the enclosure character, and the separator character, but it seems I can't find a solution! The data I receive from the .csv file looks like this:

<code>
"C","I","FLAGS","LASTUPDATEDATE","BOEVERSION","C_OSUSER_UPDATEDBY","I_OSUSER_UPDATEDBY","C_OSUSER_PWF","DESCRIPTION","DURATION","ENDDATE","I_OSUSER_PWF","LASTSTATUSCHA","STARTDATE","DURATIONUNIT","TYPE","STATUS","C_BNFTRGHT_CONDITIONS","I_BNFTRGHT_CONDITIONS","C_CNTRCT1_CONDITION","I_CNTRCT1_CONDITION","EXTBLOCKTYPE","EXTBLOCKDURATIONUNIT","EXTBLOCKDURATION","EXTBLOCKDESCRIPTION","PARTITIONID"
"7680","423","PE","2015-07-06 11:42:10","0","1000","1506","","No benefits are payable for a Total Disability period during a Parental or Family-Related Leave, for a Total Disability occurring during this period. ","0","","","","","69280000","69312015","71328000","7285","402","","","","","","","1"
"7680","426","PE","2015-07-06 11:42:10","0","1000","1506","","""Means to be admitted to a Hospital as an in-patient for more than 18 consecutive hours. "" ","0","","","","","69280000","69312021","71328000","7285","402","","","","","","","1"
</code>

My ctl file is as follows:

<code>
Load Data
infile 'C:\2020-07-29-03-04-48-TolCondition.csv'
CONTINUEIF LAST != '"'
into table TolCondition
REPLACE
FIELDS TERMINATED BY ","
ENCLOSED by '"'
( C,
  I,
  FLAGS,
  LASTUPDATEDATE DATE "YYYY-MM-DD HH24:MI:SS",
  BOEVERSION,
  C_OSUSER_UPDATEDBY,
  I_OSUSER_UPDATEDBY,
  C_OSUSER_PWF,
  DESCRIPTION CHAR(1000),
  DURATION,
  ENDDATE DATE "YYYY-MM-DD HH24:MI:SS",
  I_OSUSER_PWF,
  LASTSTATUSCHA DATE "YYYY-MM-DD HH24:MI:SS",
  STARTDATE DATE "YYYY-MM-DD HH24:MI:SS",
  DURATIONUNIT,
  TYPE,
  STATUS,
  C_BNFTRGHT_CONDITIONS,
  I_BNFTRGHT_CONDITIONS,
  C_CNTRCT1_CONDITION,
  I_CNTRCT1_CONDITION,
  EXTBLOCKTYPE,
  EXTBLOCKDURATIONUNIT,
  EXTBLOCKDURATION,
  EXTBLOCKDESCRIPTION,
  PARTITIONID)
</code>

Here is what I tried in the control file:

CONTINUEIF LAST != '"'
CONTINUEIF THIS PRESERVE (1:2) != '",'
"str X'220D0A'"

Here is the result I currently get with CONTINUEIF LAST != '"':

<code>
Record 2: Rejected - Error on table FNA_FNTFO2.TOLCONDITION, column DESCRIPTION.
second enclosure string not present
Record 3: Rejected - Error on table FNA_FNTFO2.TOLCONDITION, column C.
no terminator found after TERMINATED and ENCLOSED field

Table FNA_FNTFO2.TOLCONDITION:
  1 Row successfully loaded.
  2 Rows not loaded due to data errors.
  0 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
</code>

Is there any way to manage line breaks and the enclosure character in SQL*Loader? I don't understand why we can't change how it sees rows. Instead of starting a new row at each CR LF, can we tell it to concatenate values until the last enclosure character (chr(34) in my case) plus the separator character (',' in my case) has been seen? I really ho...
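One avenue worth sketching, purely as an assumption about the file and not a confirmed fix: if every physical record really does end with a closing quote immediately followed by CR LF, the infile "str" clause can redefine the record terminator so that bare CR LF inside quoted fields stops splitting records:

<code>
-- Hedged sketch, not a confirmed fix: X'220D0A' is " followed by CR LF,
-- so a record ends only where a closing quote is immediately followed by
-- CR LF. Caveat: the terminator characters are consumed, so the final
-- field loses its closing quote and may need to be described without an
-- enclosure.
Load Data
infile 'C:\2020-07-29-03-04-48-TolCondition.csv' "str X'220D0A'"
into table TolCondition
REPLACE
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
( C, I, FLAGS, ... , PARTITIONID )  -- same field list as above
</code>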
Categories: DBA Blogs

Generate java code in SQL/PLUS

Thu, 2024-02-08 22:26
Hi Tom, how are you? I have the following source code:

<code>
create or replace and compile java source named "Something" as
import oracle.sql.*;
public class Something {
...............
}
</code>

When I wrote the above Java source under SQL*Plus, I got the following error:

<code>
ERROR at line 1:
ORA-29536: badly formed source: Encountered "<EOF>" at line 1, column 20.
Was expecting: ";" ...
</code>

It complains about my ";" after "import oracle.sql.*", so do I need to grant some privileges or something else? Thanks

============================

Hi Tom, my database is Oracle 8i; the banner under SQL*Plus is:

<code>
SQL*Plus: Release 8.0.6.0.0 - Production on Fri Jun 22 10:12:30 2001
(c) Copyright 1999 Oracle Corporation. All rights reserved.

Connected to:
Oracle8i Enterprise Edition Release 8.1.6.1.0 - Production
With the Partitioning option
JServer Release 8.1.6.0.0 - Production
</code>

But when I executed the following, I got:

<code>
dummy@someserver> create or replace and compile java source named
  2  "Somthing"
  3  as
  4  import oracle.sql.*;
"Somthing"
*
ERROR at line 2:
ORA-29536: badly formed source: Encountered "<EOF>" at line 1, column 20.
Was expecting: ";" ...
</code>

What should I do? Thanks
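A plausible angle, stated as an assumption about the cause: by default SQL*Plus treats ";" as the statement terminator, so it may be cutting the statement off at the first semicolon inside the Java source. A minimal sketch of the workaround:

<code>
-- Hedged sketch: stop SQL*Plus treating ";" as the terminator, then
-- end the CREATE JAVA statement with a slash on its own line.
SET SQLTERMINATOR OFF
create or replace and compile java source named "Something" as
import oracle.sql.*;
public class Something {
  // class body here
}
/
SET SQLTERMINATOR ON
</code>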
Categories: DBA Blogs

Why is plan from xplan different from sql monitor report

Mon, 2024-02-05 02:46
Hi Gurus, I have the query below running in Oracle 12c. I am not sure why the plan I got from DBMS_XPLAN.DISPLAY_CURSOR differs from the one in DBMS_SQLTUNE.REPORT_SQL_MONITOR; details below. As you can see, in the xplan the INS_PT table was read and joined once, but in the monitor report the table was read and joined twice. Please share your thoughts. Thanks in advance.

Plan from the SQL monitor report:

<code>
SQL Monitoring Report

SQL Plan Monitoring Details (Plan Hash Value=2657002414)
====================================================================================================================================================
| Id | Operation                      | Name  | Rows    | Cost | Time      | Start  | Execs | Rows     | Read | Read  | Write | Write | Cell    | Mem | Activity | Activity Detail |
|    |                                |       | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | Reqs  | Bytes | Offload |     | (%)      | (# samples)     |
====================================================================================================================================================
|  0 | SELECT STATEMENT               |       |         |      |           |        |     1 |          |      |       |       |       |         |     |          |                 |
|  1 |   TEMP TABLE TRANSFORMATION    |       |         |      |           |        |     1 |          |      |       |       |       |         |     |          |                 |
|  2 |    LOAD AS SELECT              |       |         |      |         1 |     +4 |     1 |        2 |      |       |     1 | 32768 |         |     |          |                 |
|  3 |     SORT AGGREGATE             |       |       1 |      |         1 |     +4 |     1 |        1 |      |       |       |       |         |     |          |                 |
|  4 |      TABLE ACCESS STORAGE FULL | B_CTL |       1 |   13 |         1 |     +4 |     1 |        1 |      |       |       |       |         |     |          |                 |
|  9 |    LOAD AS SELECT              |       |         |      |         1 |     +4 |     1 |        2 |      |       |    15 |  15MB |         |     |          |                 |
| 10 |     HASH UNIQUE ...
</code>
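For completeness, a minimal sketch of pulling both outputs for the same cursor so they can be compared side by side ('&sql_id' and the child number 0 are placeholders):

<code>
-- Hedged sketch: fetch the cursor plan and the monitor report for the
-- same statement; substitute your own SQL_ID and child number.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', 0, 'ALLSTATS LAST'));

SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id => '&sql_id', type => 'TEXT')
FROM   dual;
</code>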
Categories: DBA Blogs

dealing with the word "group in selecting json values

Mon, 2024-02-05 02:46
I have the following JSON string in the mytbl.json_data column:

<code>
{"resourceType":"QuestionnaireResponse","extension":[{"url":"ppp","valueCode":"en"}],"identifier":{"value":"222222"},"status":"completed","subject":{"reference":"Patient/12345"},"authored":"2024-01-17T20:13:46+00:00","group":{"linkId":"root_group","title":"Demographics","question":[{"linkId":"104573","text":"What is the highest grade of school you have completed","answer":[{"valueInteger":2}]},{"linkId":"333","text":"What describes your current marital status? ","answer":[{"valueInteger":1}]}]}}
</code>

When I query the "group" field I get null:

<code>
SELECT s.json_data.group FROM mytbl s;
</code>

When I rename the "group" field to "group_1" I get the correct value. I need to be able to use the "group" field, as this is what we get from the vendor. How can I do it? I am using SQL Developer. Thank you!!!
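A minimal sketch of one way around the clash with the GROUP keyword: use an explicit JSON path with JSON_QUERY/JSON_VALUE instead of simple dot notation (table and column names as in the question):

<code>
-- Hedged sketch: an explicit JSON path sidesteps the "group" keyword
-- problem that simple dot notation runs into.
SELECT JSON_QUERY(s.json_data, '$.group')       AS grp,    -- the whole object
       JSON_VALUE(s.json_data, '$.group.title') AS title   -- a scalar inside it
FROM   mytbl s;
</code>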
Categories: DBA Blogs

Autonomous transactions and commits

Mon, 2024-02-05 02:46
Hi, We are using autonomous transactions for logging and debugging purposes. In PL/SQL code there are calls to packages which log information using autonomous transactions. Do autonomous transactions use commit point optimization? Are there waits for writing data to the redo logs? Do commit write options (wait/nowait or immediate/batch) have any relevance when using autonomous transactions? If autonomous transactions are used solely for debugging purposes, which commit parameters are best for performance? lh
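For context, a minimal sketch of the pattern being asked about, with an explicit commit write option (the app_log table is hypothetical):

<code>
-- Hedged sketch: an autonomous logging procedure. COMMIT WRITE BATCH
-- NOWAIT requests a batched, non-waiting redo write, as opposed to the
-- default COMMIT WRITE IMMEDIATE WAIT. app_log is a hypothetical table.
CREATE OR REPLACE PROCEDURE log_msg (p_msg VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO app_log (ts, msg) VALUES (SYSTIMESTAMP, p_msg);
  COMMIT WRITE BATCH NOWAIT;
END;
/
</code>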
Categories: DBA Blogs

How does one convert a column of values of different timezone to a common timezone?

Thu, 2024-01-25 10:06
I have a column with values in different time zones. I need to convert every row to PST.

<code>
----------------------
|       COLUMN       |
----------------------
|01/17/2024 18:00 PST|
|01/16/2024 18:00 CST|
|01/18/2024 12:00 IST|
|01/18/2024 07:00 -05|
|01/16/2024 14:00 -05|
|01/17/2024 18:00 IST|
|01/18/2024 17:00 IST|
|01/17/2024 16:00 GMT|
|01/18/2024 14:00 EST|
|01/17/2024 16:00 -05|
----------------------
</code>
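A minimal sketch under heavy assumptions: the values are stored as VARCHAR2, each trailing abbreviation can be mapped to one unambiguous region (IST in particular is ambiguous; Asia/Kolkata is a guess), and bare offsets like -05 are padded to a TZH:TZM form:

<code>
-- Hedged sketch: map each trailing token to something TO_TIMESTAMP_TZ
-- can parse, then shift everything to the Los Angeles zone. Adjust the
-- abbreviation-to-region mapping to your data.
WITH t AS (SELECT '01/16/2024 18:00 CST' AS col FROM dual)
SELECT (CASE
          WHEN REGEXP_LIKE(col, '[+-][0-9]+$')   -- bare offsets, e.g. -05
            THEN TO_TIMESTAMP_TZ(col || ':00', 'MM/DD/YYYY HH24:MI TZH:TZM')
          ELSE TO_TIMESTAMP_TZ(
                 REGEXP_SUBSTR(col, '^\S+ \S+') || ' ' ||
                 CASE REGEXP_SUBSTR(col, '\S+$')
                   WHEN 'PST' THEN 'America/Los_Angeles'
                   WHEN 'CST' THEN 'America/Chicago'
                   WHEN 'EST' THEN 'America/New_York'
                   WHEN 'GMT' THEN 'GMT'
                   WHEN 'IST' THEN 'Asia/Kolkata'  -- assumption!
                 END, 'MM/DD/YYYY HH24:MI TZR')
        END) AT TIME ZONE 'America/Los_Angeles' AS col_pst
FROM   t;
</code>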
Categories: DBA Blogs

ORA-12569: TNS:Packet Checksum Failure - How is this error even possible?

Thu, 2024-01-25 10:06
Hello masters. Here I am again with another mystery from our favorite DBMS (no, that was not sarcasm, I do love Oracle in fact). In my company we are facing the referenced error, ORA-12569: TNS:Packet Checksum Failure While Trying To Connect Through Client. The message itself is fairly self-explanatory: apparently a TCP packet failed the checksum. There is a document for this error specifically, Doc ID 257793.1, which states it plainly:

<code>There is a mismatch in the header of the tcp packet between the client and the server. In other words, the "Packet Header" leaves the DB server but by the time the client gets the packet on the other end, the header has changed (or visa versa).</code>

The doc even gives an example of packets captured with Net tracing from both the server and the client, demonstrating corruption in the TCP header. And the proposed solution does not involve the database/client itself, but rather analysis of the network:

<code>Check what components may be interfering with TCP Packet information, such as Firewalls or other NIPS (Network Intrusion Prevention Systems). Contact the Systems / Network Administrators for this and have them fix the underlying tcp/ip packets problem or disable any "SQL altering" programs / settings.</code>

All of this kind of makes sense to me. But then, after thinking a little about the situation, something struck me: the Oracle client (the application at the higher level) should not receive any bad TCP packets, because they are checked by the network interface (the transmission itself at the lower level). I mean, the Transmission Control Protocol has its own checksum at the transport layer; before handing the data up to the application layer, the transport layer validates the segment and, in case of errors, asks the sender to retransmit that specific bad segment. How is it possible that corrupted TCP packets arrive at the application level? I'm not a network specialist, but this situation (and the Oracle error) is really puzzling me. So riddle me this, masters: how come the Oracle client detects a bad TCP packet when the network level doesn't?
Categories: DBA Blogs

Use nonprefix local index

Thu, 2024-01-25 10:06
Tom, Could you please explain in which scenarios I should consider a nonprefixed local index over a prefixed local index? Thank you very much for your time.
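To anchor the terminology, a small sketch of the two variants on a hypothetical range-partitioned table (all names invented for illustration):

<code>
-- Hedged sketch on an invented table. Prefixed: the partition key leads
-- the index key. Nonprefixed: it does not, so a query filtering only on
-- cust_id must probe every index partition unless something else prunes.
CREATE TABLE sales (sale_date DATE, cust_id NUMBER, amount NUMBER)
PARTITION BY RANGE (sale_date)
( PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01')
, PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01') );

CREATE INDEX sales_pref_ix    ON sales (sale_date, cust_id) LOCAL;  -- prefixed
CREATE INDEX sales_nonpref_ix ON sales (cust_id)            LOCAL;  -- nonprefixed
</code>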
Categories: DBA Blogs

Sessions waiting forever on SQL*Net message from db link but no session in link target db

Thu, 2024-01-25 10:06
Dear AskTom-Team, one of my customers has multiple Standard Edition database locations worldwide (different continents), and data is transferred via database links in PL/SQL through a job scheduler framework. There is one "management db", which acts as the scheduler; this db has links to all databases. It then performs DML statements like:

INSERT INTO table@target_db SELECT ... FROM table@source_db WHERE ... ;

It turns out that every couple of weeks a session hangs waiting on "SQL*Net message from db link", with SECONDS_IN_WAIT increasing but SEQ# staying the same. When I check the sessions on the database the link points to, I do not see a session. Both databases (Windows) have SQLNET.EXPIRE_TIME set to 10. SQL_ID only shows the top-level PL/SQL scheduler call, not the actual statement. I am having a hard time troubleshooting this any further. Enabling SQL*Net tracing would be one option, but the issue only happens sporadically.

1) Do you recommend adapting the TCP send/receive windows in tnsnames/listener.ora?
2) Is there sample code on how to handle exceptions regarding distributed transactions? Is there a way to let the PL/SQL code break out of the "SQL*Net message from db link" wait?
3) Would it be useful to check db link response with a dummy query (select null from dual@<target|source>_db) and only perform the DML statement if this was successful?
4) Should we proactively close the db link after use, and if yes, after every query or at the end of the job?

Best regards, Martin
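Regarding question 4, a minimal sketch of explicitly releasing a link at the end of a job step (the link name is a placeholder; any open transaction must end first):

<code>
-- Hedged sketch: close the remote session once a job step is done.
-- An open distributed transaction must be committed or rolled back
-- before the link can be closed.
COMMIT;
ALTER SESSION CLOSE DATABASE LINK target_db;

-- or, from inside PL/SQL:
BEGIN
  DBMS_SESSION.CLOSE_DATABASE_LINK('TARGET_DB');
END;
/
</code>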
Categories: DBA Blogs

String-splitting with not all values present

Wed, 2024-01-24 15:46
Hello. This is a variation on the string-splitting examples herein. In this case, I have strings in a column with a consistent delimiter (a comma in this example), but not all values are present. I would like the values split into separate columns, which is why I have not tried to employ the example code that turns these values into rows. This does not work, as it omits the null values, which I need:

<code>
WITH input_string AS (
  SELECT 'Jagjit,,Jane,Harinder,,Alice' AS my_string FROM dual
)
SELECT REGEXP_SUBSTR(my_string, '[^,]+', 1, 1) Name1
,      REGEXP_SUBSTR(my_string, '[^,]+', 1, 2) Name2
,      REGEXP_SUBSTR(my_string, '[^,]+', 1, 3) Name3
,      REGEXP_SUBSTR(my_string, '[^,]+', 1, 4) Name4
,      REGEXP_SUBSTR(my_string, '[^,]+', 1, 5) Name5
,      REGEXP_SUBSTR(my_string, '[^,]+', 1, 6) Name6
FROM   input_string
;

NAME1  NAME NAME3    NAME4 NAME5 NAME6
------ ---- -------- ----- ----- -----
Jagjit Jane Harinder Alice
</code>

I can get this method to work by artificially inserting a character into the null strings, as in this case:

<code>
WITH input_string AS (
  SELECT 'Jagjit, ,Jane,Harinder, ,Alice' AS my_string FROM dual
)
SELECT TRIM(REGEXP_SUBSTR(my_string, '[^,]+', 1, 1)) Name1
,      TRIM(REGEXP_SUBSTR(my_string, '[^,]+', 1, 2)) Name2
,      TRIM(REGEXP_SUBSTR(my_string, '[^,]+', 1, 3)) Name3
,      TRIM(REGEXP_SUBSTR(my_string, '[^,]+', 1, 4)) Name4
,      TRIM(REGEXP_SUBSTR(my_string, '[^,]+', 1, 5)) Name5
,      TRIM(REGEXP_SUBSTR(my_string, '[^,]+', 1, 6)) Name6
FROM   input_string
;

NAME1  NAME2 NAME NAME4    NAME5 NAME6
------ ----- ---- -------- ----- -----
Jagjit       Jane Harinder       Alice
</code>

However, I am sure there is a better way! I just cannot figure it out. I do realise that '[^,]' means "match a character that is not the delimiter". Thanks in advance.
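For comparison, a sketch of the usual workaround on 11g and later: match "zero or more non-delimiters followed by a comma or end of string" and return the captured subexpression (the sixth REGEXP_SUBSTR argument), so empty elements come back as NULL without padding the input:

<code>
-- Hedged sketch: the capture group keeps empty elements as NULL.
WITH input_string AS (
  SELECT 'Jagjit,,Jane,Harinder,,Alice' AS my_string FROM dual
)
SELECT REGEXP_SUBSTR(my_string, '([^,]*)(,|$)', 1, 1, NULL, 1) Name1
,      REGEXP_SUBSTR(my_string, '([^,]*)(,|$)', 1, 2, NULL, 1) Name2
,      REGEXP_SUBSTR(my_string, '([^,]*)(,|$)', 1, 3, NULL, 1) Name3
,      REGEXP_SUBSTR(my_string, '([^,]*)(,|$)', 1, 4, NULL, 1) Name4
,      REGEXP_SUBSTR(my_string, '([^,]*)(,|$)', 1, 5, NULL, 1) Name5
,      REGEXP_SUBSTR(my_string, '([^,]*)(,|$)', 1, 6, NULL, 1) Name6
FROM   input_string;
</code>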
Categories: DBA Blogs

Coalesce Behavior

Wed, 2024-01-24 15:46
Here's the simplest test case I could come up with that demonstrates what I'm seeing. The actual query I'm trying to write is against a couple of v$ views, and livesql doesn't seem to come with privileges on those.

<code>
create table mytab (
  name  varchar2(30),
  value varchar2(60)
);

insert into mytab values( 'home_phone', null );
insert into mytab values( 'cell_phone', '867-5309' );
commit;

select coalesce( home.value, cell.value, 'None' )
from   ( select value from mytab where name = 'home_phone' ) home,
       ( select value from mytab where name = 'cell_phone' ) cell;

select coalesce( home.value, cell.value, 'None' )
from   ( select value from mytab where name = 'bad_phone' ) home,
       ( select value from mytab where name = 'cell_phone' ) cell;
</code>

The first query behaves exactly as expected; because the value for home_phone is null and the value for cell_phone isn't, it returns the value for cell_phone. The second one is the one that threw me: I would have expected that if there are no rows matching 'bad_phone', coalesce would move to the second result and return the value for cell_phone again. So my question is two-fold. Is this expected and correct behavior? I think there's an argument to be made that it doesn't match the sort of intuitive understanding most people will have of "returns the first non-null expression". And if this is correct, what's the best way to write a query that returns the value for home_phone if there's a matching row, and moves on to cell_phone if there isn't a home_phone record?
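One way to sketch the second part of the question, against the same mytab from the example: scalar subqueries return NULL rather than eliminating rows when nothing matches, so COALESCE behaves the way the intuition expects:

<code>
-- Hedged sketch: a scalar subquery that finds no row yields NULL,
-- so COALESCE can fall through to the next choice.
SELECT COALESCE(
         (SELECT value FROM mytab WHERE name = 'home_phone'),
         (SELECT value FROM mytab WHERE name = 'cell_phone'),
         'None') AS phone
FROM   dual;
</code>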
Categories: DBA Blogs

How to copy a package

Tue, 2024-01-23 03:06
I would like to copy an existing package to create a new one: copy RPT and create RPT2. Can you please show me how to do that? Thank you for your support. Hiroki
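A minimal sketch of one common route, assuming the session can run DBMS_METADATA against the package owner:

<code>
-- Hedged sketch: extract the spec and body, then edit the name by hand.
SET LONG 100000 LONGCHUNKSIZE 100000 PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL('PACKAGE', 'RPT') FROM dual;
-- Paste the output into a script, change every RPT to RPT2 in both the
-- spec and the body, then run the edited CREATE OR REPLACE statements.
</code>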
Categories: DBA Blogs

How can I modify the password for the frmsal.jar file (TrustStore), replacing the default password (changeit)?

Tue, 2024-01-23 03:06
How can I modify the password for the frmsal.jar file (TrustStore), replacing the default password (changeit)? Is it recommended to change the password, given that the frmsal.jar file is provided by Oracle?
Categories: DBA Blogs

Bracket expression in Oracle regular expression

Wed, 2024-01-17 18:46
Hello Tom, I am learning Oracle regular expressions. I am using Oracle 12c. I understand that a bracket expression containing a series of characters matches any one of those characters, so regexp_substr returns the first such character found in the input string. Thus:

select regexp_substr('123 Oracle', '[abc]') from dual

returns the character 'a'. What if bracket expressions are nested?

select regexp_substr('123 Oracle', '[[abc]]') from dual

returns NULL, indicating no match. But why? How does Oracle interpret this regular expression? Within the outer [] brackets, I expect the inner [] brackets to be treated as literals, so the output should be the first character among the list [ a b c ] to be matched. Once again regexp_substr should return 'a'. Why is NULL returned instead? A variation on this:

select regexp_substr('[]123 Oracle', '[[abc]]') from dual

returns '[]'. And I thought that a bracket expression would always match a single character. Any suggestions? Thanks Geetha
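A small illustration of the parse that would be consistent with both observations (a hedged reading, not an authoritative one): the bracket list is closed by the first ']', so '[[abc]]' is the list [[abc] (one of the characters [ a b c) followed by a literal ']', which makes it a two-character pattern:

<code>
-- Hedged illustration: '[[abc]]' = one of the characters [ a b c,
-- followed by a literal ].
SELECT REGEXP_SUBSTR('a]123 Oracle', '[[abc]]') FROM dual;  -- 'a]'
SELECT REGEXP_SUBSTR('123 Oracle',   '[[abc]]') FROM dual;  -- NULL: no ']' follows
</code>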
Categories: DBA Blogs

Verifying backups

Wed, 2024-01-17 18:46
Hi Tom. I'm a great believer that a backup you haven't tested isn't a backup, so I'm trying to ensure that our RMAN backups get automatically verified. However, I've come across some weird behaviour with RMAN backup validation. My aim is to run full verification of our backups on a regular basis using RMAN and send an email to someone if it finds any errors. I've been testing it out and found some strange behaviour which I'd like to understand.

I start by creating a full backup of the database using RMAN. We use incrementally updated image copies, so I'm using "backup incremental level 1 for recover of copy with tag 'database' database;". Once I've performed the backup:

1. If I run "restore database validate" and "restore archivelog all validate", it (unsurprisingly) says all is fine.
2. If I hex edit one of the data files and run the same commands, it picks up the corruption and returns an error code to the command prompt.
3. If I hex edit one of the archive logs and run the same commands, it picks up the corruption and returns an error code to the command prompt.
4. If I delete one of the archive logs and run the same commands, it picks up the missing file and returns an error code to the command prompt.

However, if I delete one of the data file copies, RMAN shows an error on screen but then says "datafile X will be created automatically during restore operation" and does not return an error code to the command prompt. See the RMAN log below:

<code>
channel ORA_DISK_1: scanning datafile copy C:\BACKUPS\MYDB\E812A6F4CE05460DBEABDF29CB7A5D6F\DATAFILE\O1_MF_UNDOTBS1_LSJCL5G5_.DBF
channel ORA_DISK_1: scanning datafile copy C:\BACKUPS\MYDB\E812A6F4CE05460DBEABDF29CB7A5D6F\DATAFILE\O1_MF_TBS1_LSJCDDKS_.DBF
channel ORA_DISK_1: scanning datafile copy C:\BACKUPS\MYDB\E812A6F4CE05460DBEABDF29CB7A5D6F\DATAFILE\O1_MF_TBS1_LSJCDVOT_.DBF
channel ORA_DISK_1: scanning datafile copy C:\DBACKUPS\MYDB\C3\E812A6F4CE05460DBEABDF29CB7A5D6F\DATAFILE\O1_MF_TBS1_LSJCHCYH_.DBF
ORA-19505: failed to identify file "C:\BACKUPS\MYDB\E812A6F4CE05460DBEABDF29CB7A5D6F\DATAFILE\O1_MF_TBS1_LSJCHCYH_.DBF"
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
ORA-19600: input file is datafile copy 0 (C:\BACKUPS\MYDB\E812A6F4CE05460DBEABDF29CB7A5D6F\DATAFILE\O1_MF_TBS1_LSJCHCYH_.DBF)
channel ORA_DISK_1: scanning datafile copy C:\BACKUPS\MYDB\E812A6F4CE05460DBEABDF29CB7A5D6F\DATAFILE\O1_MF_TBS1_LSJCG8VQ_.DBF
channel ORA_DISK_1: scanning datafile copy C:\BACKUPS\MYDB\E812A6F4CE05460DBEABDF29CB7A5D6F\DATAFILE\O1_MF_TBS1_LSJCC7MK_.DBF
failover to previous backup

datafile 13 will be created automatically during restore operation
Finished restore at 05-JAN-24
</code>

So my questions are:

1. Why does RMAN not consider a completely missing data file a serious enough error in backup validation to throw an error? The documentation says "The main purpose of RMAN validation is to check fo...
Categories: DBA Blogs

Unloading tables without enclosing quotes and with row terminators

Wed, 2024-01-17 00:26
I'm looking to use SQL Developer's SQLcl (v23.1) to download some Oracle 19c tables to csv files. I tried using UNLOAD with the following formatting:

<code>
set loadformat delimited delimiter <d> enclosures off column_names off row_terminator <L>
</code>

Using the UNLOAD command, though, has drawbacks such as not being able to specify the output file name or to limit the rows being downloaded. I therefore tried using SPOOL, since it lets me specify the output file name and limit the rows I want to download, with the following formatting:

<code>
set sqlformat delimited <d>
</code>

This, however, has its own drawbacks, such as not being able to turn off enclosing quotes or to specify a row terminator (as far as I can tell). Is there a way to get SPOOL to work without enclosing quotes and with a row terminator? Something like this:

<code>
set sqlformat delimited <d> enclosures off row_terminator <L>
</code>

I appreciate any assistance you can provide. Thanks.
Categories: DBA Blogs

Does Centrally Managed Users actually "CREATE" the database user account?

Wed, 2024-01-17 00:26
I just viewed Russ Lowenthal's YouTube video (https://www.youtube.com/watch?v=FU8UGLs8QGU) regarding "Authenticate Oracle Database users with MS Active Directory." In the last, summarizing slide, he made the following statement: "Once CMU is configured, ALL user management can be done in Active Directory [including] - Create New Users . . . etc."

Yet in all the documentation, and in his example, it sounds like you cannot just create a user in AD. It seems you must also go to the database (as a DBA) and run something like "CREATE USER JSMITH IDENTIFIED EXTERNALLY....." before the user can actually connect to the database. In other words, for direct (not shared) accounts, CMU will NOT create the account in the database; that is still a manual step performed by the DBA beforehand. Do I understand that correctly? In which case his summarizing statement would be an overstatement...? Thank you so much!
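For reference, a minimal sketch of the manual step being described (the distinguished name is invented for illustration):

<code>
-- Hedged sketch: a directly mapped CMU account still needs this one-time
-- DDL in the database; the DN below is illustrative only.
CREATE USER jsmith IDENTIFIED GLOBALLY AS
  'CN=JSMITH,OU=Users,DC=example,DC=com';
GRANT CREATE SESSION TO jsmith;
</code>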
Categories: DBA Blogs
