Feed aggregator

TCPS Connection With an Oracle Instant Client

Pete Finnigan - 1 hour 5 min ago
All of our products (PFCLScan, PFCLCode, PFCLObfuscate and http://www.petefinnigan.com/products/pfclforensics.htm) can use an Oracle instant client, or even a full client, to connect to the target database(s). It is of course simpler to use an instant client if....[Read More]

Posted by Pete On 27/11/20 At 03:56 PM

Categories: Security Blogs

Local Web-App (ASP.NET Core with EF Core works flawlessly) Publishing to azure

Tom Kyte - 1 hour 5 min ago
When I publish the ASP.NET Core web app to Azure I get the following error when trying to use the Oracle database: <code> NetworkException: ORA-12154: TNS:could not resolve the connect identifier specified OracleInternal.Network.AddressResolution..ctor(string TNSAlias, SqlNetOraConfig SNOConfig, Hashtable ObTnsHT, Hashtable ObLdpHT, string instanceName, ConnectionOption CO) OracleException: ORA-12154: TNS:could not resolve the connect identifier specified OracleInternal.ConnectionPool.PoolManager<PM, CP, PR>.Get(ConnectionString csWithDiffOrNewPwd, bool bGetForApp, OracleConnection connRefForCriteria, string affinityInstanceName, bool bForceMatch)</code> I fixed these errors by setting TNS_ADMIN and WALLET_LOCATION in my application settings (ENV), and I have also set WEBSITE_LOAD_USER_PROFILE. Then I get a new error when trying to run Oracle DB code: An unhandled exception occurred while processing the request. OracleException: Connection request timed out OracleInternal.ConnectionPool.PoolManager<PM, CP, PR>.Get(ConnectionString csWithDiffOrNewPwd, bool bGetForApp, OracleConnection connRefForCriteria, string affinityInstanceName, bool bForceMatch) This one I can't seem to figure out. Does anyone have any pointers or things I can try? My connection string looks like this and works perfectly on the local setup: <code>"User Id=user;Password=password;Data Source=db202011081718_medium;"</code> and it is injected here in startup.cs: <code> services.AddDbContext<ApplicationDbContext>(options => options.UseOracle( Configuration.GetConnectionString("OracleDBContextConnection"))); </code> I must be missing something related to the Azure/Oracle specifics that isn't playing nice. I've watched this video 3/4 times: https://www.youtube.com/watch?v=-IkDuJy30mY
Categories: DBA Blogs

Delete 40%(4TB out of 10TB) of unwanted data using Partitioning techniques from Oracle db(11g) and reclaim space.

Tom Kyte - 1 hour 5 min ago
Problem statement: delete 40% (4TB out of 10TB) of unwanted data from an Oracle 11g database using partitioning techniques, and reclaim the space. Details: my application DB schema size is 10 TB. It is Oracle 11g and the schema is co-located with many other schemas, hence no option involving DB downtime is possible. The top tables by size start from 3 TB (Table_1), 2 TB (Table_2), and so on. We planned to delete 40% of unwanted data from each table (with a WHERE criteria). Plain delete scripts are practically not possible on such huge tables, as they cause the DB server to hang and impact the other schemas' services. The following are not options, due to the downtime constraints and no filesystem space to create temp tables and switch back: removing data by creating a new table, or switching the tables over (CTAS). Proposed strategy for deletion: partition (split partition / sub-partition) these tables based on the data patterns (WHERE clause) so as to separate the purge-eligible data; after separating it, truncate or drop the unwanted partitions in the background (during off-business hours); after truncating the partitions, make the table un-partitioned again, back to the original. Reclaiming space: which is the best option to choose? 1. alter database datafile xxx.dbf resize 2. alter tablespace xxx coalesce 3. export-import 4. dbms_redefinition 5. alter table xxx shrink. Any guidance on the proposed solution? Also: what partitioning should be used for a pattern whose joins/WHERE clause involves 3 to 4 tables? Are there examples of pattern-based partitioning, and is it supported in 11g? And how can the main table be un-partitioned back to the original after truncating the unwanted partitions?
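To make the proposed strategy concrete, here is a rough sketch of the kind of DDL involved, using hypothetical names (BIG_TAB, P_ALL, P_PURGE, P_KEEP) and an assumed purge predicate on a date column; it is not the poster's actual schema, and it assumes the table has already been converted into a range-partitioned table with a single partition:
<code>
-- Split the single partition so the purge-eligible rows land in their own partition.
alter table big_tab split partition p_all
  at (to_date('01-JAN-2018','DD-MON-YYYY'))
  into (partition p_purge, partition p_keep);

-- During off-business hours: throw the unwanted partition away without row-by-row undo/redo.
alter table big_tab drop partition p_purge update global indexes;

-- Space is released inside the tablespace; the datafile itself only shrinks
-- (alter database datafile ... resize) if the freed extents sit at its end.
</code>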
Categories: DBA Blogs

Cannot read external table or read using utl_file.get_line on Windows 10 with Oracle 18c

Tom Kyte - 1 hour 5 min ago
I just upgraded to Oracle 18c XE from 11g. I have an external table that worked fine on 11g, but I keep getting the following errors on 18c:

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-11604: no directory object specified for log file

The directory does exist and I have the correct grants.

<code>SELECT * FROM all_directories WHERE directory_name = 'MYDIR';</code>

OWNER  DIRECTORY_NAME  DIRECTORY_PATH                                 ORIGIN_CON_ID
SYS    MYDIR           C:\Users\sneuf\OneDrive\PLSQL\OracleDirectory  1

<code>SELECT * FROM all_tab_privs WHERE table_name = 'MYDIR' AND grantee = 'C##_SNEUF';</code>

GRANTOR  GRANTEE    TABLE_SCHEMA  TABLE_NAME  PRIVILEGE  GRANTABLE  HIERARCHY  COMMON  TYPE       INHERITED
SYS      C##_SNEUF  SYS           MYDIR       EXECUTE    YES        NO         NO      DIRECTORY  NO
SYS      C##_SNEUF  SYS           MYDIR       READ       YES        NO         NO      DIRECTORY  NO
SYS      C##_SNEUF  SYS           MYDIR       WRITE      YES        NO         NO      DIRECTORY  NO

I'm pretty sure I'm missing a grant somewhere, but I can't figure out what. Here is my table:

<code>CREATE TABLE C##_SNEUF.CHECKING_TBL_EXT2
(
  DB_KEY              NUMBER,
  CHECK_NUM           VARCHAR2(10),
  TRANS_DATE          TIMESTAMP (6),
  DESCRIPTION         VARCHAR2(100),
  DEPOSIT_WITHDRAWAL  VARCHAR2(1),
  AMOUNT              VARCHAR2(12),
  MEMO                VARCHAR2(200)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY MYDIR
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE
    BADFILE MYDIR: 'checking.bad'
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
    MISSING FIELD VALUES ARE NULL
    (
      DB_key              CHAR,
      check_num           CHAR(10),
      trans_date          CHAR(21) DATE_FORMAT DATE MASK 'MM/DD/YYYY HH24:MI:SS',
      description         CHAR(100),
      deposit_withdrawal  CHAR(1),
      amount              CHAR(12),
      memo                CHAR(200)
    )
  )
  LOCATION ( MYDIR: 'checking.csv' )
)
REJECT LIMIT UNLIMITED
;</code>

Thanks, Steve
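Not an answer from the thread, but one low-risk thing to test: KUP-11604 complains about the log file rather than the data file, so explicitly naming a log file directory (or suppressing the log with NOLOGFILE) in the access parameters is worth a try. A sketch reusing the poster's own table and directory names, on the assumption that ALTER TABLE ... ACCESS PARAMETERS is available for this external table:
<code>
-- Assumption: explicitly name the log file (or replace that line with NOLOGFILE)
-- so the ORACLE_LOADER driver never has to pick a default log directory.
ALTER TABLE C##_SNEUF.CHECKING_TBL_EXT2
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE
    BADFILE MYDIR: 'checking.bad'
    LOGFILE MYDIR: 'checking.log'
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
    MISSING FIELD VALUES ARE NULL
    (
      DB_key              CHAR,
      check_num           CHAR(10),
      trans_date          CHAR(21) DATE_FORMAT DATE MASK 'MM/DD/YYYY HH24:MI:SS',
      description         CHAR(100),
      deposit_withdrawal  CHAR(1),
      amount              CHAR(12),
      memo                CHAR(200)
    )
  );
</code>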
Categories: DBA Blogs

Documentum DM_SYSOBJECT_E_NO_BROWSE_ACCESS No browse access for sysobject with ID

Yann Neuhaus - Thu, 2020-11-26 06:50

When upgrading you may get this specific issue if you are trying to edit content or create new documents. We got it when creating a new document in a specific folder: D2 told us that the folder doesn't exist and that it had no browse access.

Foolishness! The folder does exist and all accesses are set properly so what happened? Here is the full log:

Caused by: com.documentum.fc.client.DfIdNotFoundException: [DM_API_E_EXIST]error:  "Folder specified by 0b010ba18005277d does not exist."
        at com.documentum.fc.client.impl.session.Session.getFolderBySpecification(Session.java:1289)
        at com.documentum.fc.client.impl.session.SessionHandle.getFolderBySpecification(SessionHandle.java:835)
        at com.emc.smartwave.web.servlets.listeners.properties.D2CreationServicePlugin.printParentFolder(D2CreationServicePlugin.java:151)
        at com.emc.smartwave.web.servlets.listeners.properties.D2CreationServicePlugin.formerOnAfter(D2CreationServicePlugin.java:187)
        at com.emc.smartwave.web.servlets.listeners.properties.D2CreationServicePlugin.createProperties(D2CreationServicePlugin.java:133)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.callPlugin(InjectSessionAspect.java:487)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.ajc$inlineAccessMethod$com_emc_d2fs_dctm_aspects_InjectSessionAspect$com_emc_d2fs_dctm_aspects_InjectSessionAspect$callPlugin(InjectSessionAspect.java:1)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.process(InjectSessionAspect.java:130)
        at com.emc.d2fs.dctm.web.services.create.D2CreationService.createProperties(D2CreationService.java:97)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.callPlugin(InjectSessionAspect.java:487)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.ajc$inlineAccessMethod$com_emc_d2fs_dctm_aspects_InjectSessionAspect$com_emc_d2fs_dctm_aspects_InjectSessionAspect$callPlugin(InjectSessionAspect.java:1)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.process(InjectSessionAspect.java:236)
        ... 55 common frames omitted
Caused by: com.documentum.fc.common.DfException: [DM_SYSOBJECT_E_NO_BROWSE_ACCESS]error:  "No browse access for sysobject with ID '0b010ba18005277d'."
        at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:57)
        at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getException(DocbaseMessageManager.java:137)
        at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.checkForMessages(NetwiseDocbaseRpcClient.java:329)
        at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.applyForObject(NetwiseDocbaseRpcClient.java:672)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection$8.evaluate(DocbaseConnection.java:1382)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.evaluateRpc(DocbaseConnection.java:1141)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.applyForObject(DocbaseConnection.java:1374)
        at com.documentum.fc.client.impl.docbase.DocbaseApi.parameterizedFetch(DocbaseApi.java:107)
        at com.documentum.fc.client.impl.objectmanager.PersistentDataManager.fetchFromServer(PersistentDataManager.java:191)
        at com.documentum.fc.client.impl.objectmanager.PersistentDataManager.getData(PersistentDataManager.java:82)
        at com.documentum.fc.client.impl.objectmanager.PersistentObjectManager.getObjectFromServer(PersistentObjectManager.java:355)
        at com.documentum.fc.client.impl.objectmanager.PersistentObjectManager.getObject(PersistentObjectManager.java:311)
        at com.documentum.fc.client.impl.session.Session.getObject(Session.java:965)
        at com.documentum.fc.client.impl.session.Session.getFolderBySpecification(Session.java:1281)
        ... 74 common frames omitted

The resolution is quite simple: it's just a cache problem.

You only need to force the clients to clear their caches, and there's a simple way to do this: the content server keeps a reference to the “client version” in the dm_docbase_config object, and incrementing its value forces all clients to reload and clear their caches. So here is how you do it:

Using the IAPI console (these are API commands rather than DQL):

retrieve,c,dm_docbase_config
set,c,l,client_pcaching_change
7
save,c,l
reinit,c

Just increase the client_pcaching_change number by 1.

The post Documentum DM_SYSOBJECT_E_NO_BROWSE_ACCESS No browse access for sysobject with ID appeared first on the dbi services blog.

Row sizes 3

Jonathan Lewis - Thu, 2020-11-26 06:08

Several years ago I published a couple of examples of scripts that I’d been using since Oracle 6 (with minor changes over time) to analyse the content of tables in some detail. The first was a breakdown of the lengths of the rows in the table, the second was a map showing the distribution of nulls in the rows. There used to be a third component of the analysis which produced a report of the number of non-null entries for each column in the table,  but by the time I published the first two parts there didn’t seem to be much point in publishing the third since you could get a sufficiently accurate picture by querying view user_tab_columns after gathering stats:


break on num_rows

select
        tab.num_rows, col.column_name , col.num_nulls
from 
        user_tables             tab
join
        user_tab_columns        col
on
        col.table_name = tab.table_name
where
        tab.table_name = 'T1' 
order by 
        col.num_nulls desc, col.column_id
/


  NUM_ROWS COLUMN_NAME           NUM_NULLS
---------- -------------------- ----------
     56777 EDITION_NAME              56777
           CREATED_APPID             56777
           CREATED_VSNID             56777
           MODIFIED_APPID            56777
           MODIFIED_VSNID            56777
           SUBOBJECT_NAME            56570
           DATA_OBJECT_ID            55353
           DEFAULT_COLLATION         51058
           EDITIONABLE               40216
           OWNER                         0
           OBJECT_NAME                   0
           OBJECT_ID                     0
           OBJECT_TYPE                   0
           CREATED                       0
           LAST_DDL_TIME                 0
           TIMESTAMP                     0
           STATUS                        0
           TEMPORARY                     0
           GENERATED                     0
           SECONDARY                     0
           NAMESPACE                     0
           SHARING                       0
           ORACLE_MAINTAINED             0
           APPLICATION                   0
           DUPLICATED                    0
           SHARDED                       0


In this example t1 is a copy of view all_objects and I’ve listed the columns in order of most frequently null with the table’s num_rows reported at the top for comparison purposes. If you were going to do something that made it a good idea to rebuild the table you could use this report as a guide to re-arranging the column order to position the most frequently null columns towards the end of the row (saving the “trailing nulls take no space” length bytes).
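If you did decide on a rebuild, a quick way to generate the re-ordered column list for the CTAS (a little sketch of my own, using the same t1 and assuming the stats are in place) is to let user_tab_columns do the ordering:


rem     list the columns with the most frequently null columns last

select
        listagg(column_name, ', ')
                within group (order by num_nulls, column_id)    rebuild_column_list
from
        user_tab_columns
where
        table_name = 'T1'
/
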

This approach depends on the stats being up to date and accurate, of course, so if you didn’t have them, and didn’t want to collect them, another strategy would be to run a query like the following:


select
        to_char(count(OWNER),'999,999,999,999,999') OWNER,
        to_char(count(OBJECT_NAME),'999,999,999,999,999') OBJECT_NAME,
        to_char(count(SUBOBJECT_NAME),'999,999,999,999,999') SUBOBJECT_NAME,
        to_char(count(OBJECT_ID),'999,999,999,999,999') OBJECT_ID,
        to_char(count(DATA_OBJECT_ID),'999,999,999,999,999') DATA_OBJECT_ID,
        to_char(count(OBJECT_TYPE),'999,999,999,999,999') OBJECT_TYPE,
        to_char(count(CREATED),'999,999,999,999,999') CREATED,
        to_char(count(LAST_DDL_TIME),'999,999,999,999,999') LAST_DDL_TIME,
        to_char(count(TIMESTAMP),'999,999,999,999,999') TIMESTAMP,
        to_char(count(STATUS),'999,999,999,999,999') STATUS,
        to_char(count(TEMPORARY),'999,999,999,999,999') TEMPORARY,
        to_char(count(GENERATED),'999,999,999,999,999') GENERATED,
        to_char(count(SECONDARY),'999,999,999,999,999') SECONDARY,
        to_char(count(NAMESPACE),'999,999,999,999,999') NAMESPACE,
        to_char(count(EDITION_NAME),'999,999,999,999,999') EDITION_NAME,
        to_char(count(SHARING),'999,999,999,999,999') SHARING,
        to_char(count(EDITIONABLE),'999,999,999,999,999') EDITIONABLE,
        to_char(count(ORACLE_MAINTAINED),'999,999,999,999,999') ORACLE_MAINTAINED,
        to_char(count(APPLICATION),'999,999,999,999,999') APPLICATION,
        to_char(count(DEFAULT_COLLATION),'999,999,999,999,999') DEFAULT_COLLATION,
        to_char(count(DUPLICATED),'999,999,999,999,999') DUPLICATED,
        to_char(count(SHARDED),'999,999,999,999,999') SHARDED,
        to_char(count(CREATED_APPID),'999,999,999,999,999') CREATED_APPID,
        to_char(count(CREATED_VSNID),'999,999,999,999,999') CREATED_VSNID,
        to_char(count(MODIFIED_APPID),'999,999,999,999,999') MODIFIED_APPID,
        to_char(count(MODIFIED_VSNID),'999,999,999,999,999') MODIFIED_VSNID,
        to_char(count(*),'999,999,999,999,999') row_count 
from
         t1
;

You don’t really need the to_char() function, but it’s a convenience for what I’m going to do with the SQL. Obviously it would be a bit tedious to create this statement by hand so, as I did in the previous “rowsize” notes, I’ve written some code to generate it for me:


rem
rem     row_size_2b.sql
rem     
rem     Generate SQL to report counts of 
rem     non-null columns in a table.
rem

set linesize 32000
set feedback off
define m_table = '&1'

declare
        m_string        varchar2(32000) := 'select ';
        m_cursor        sys_refcursor;

begin
        for r in (
                select 
                        column_name, data_type
                from    user_tab_columns
                where   table_name = upper('&m_table')
        ) loop
                m_string := m_string ||  
                                ' to_char(count(' || trim(r.column_name) || '),''999,999,999,999,999'') ' ||
                                trim(r.column_name) || ',' || chr(10) ;
        end loop;

        m_string := m_string || ' to_char(count(*),''999,999,999,999,999'') row_count from &m_table';

--      dbms_output.put_line(m_string);
        print_table(m_string);

end;
/

set linesize 156

The script accepts a table name from the user’s schema (you could edit this to query dba_tab_columns), constructs a string (as shown above – though I’ve done a little cosmetic work on it), and then passes the resulting string to a version of Tom Kyte’s print_table() routine which produces the following output:


OWNER                         :               56,777
OBJECT_NAME                   :               56,777
SUBOBJECT_NAME                :                  207
OBJECT_ID                     :               56,777
DATA_OBJECT_ID                :                1,424
OBJECT_TYPE                   :               56,777
CREATED                       :               56,777
LAST_DDL_TIME                 :               56,777
TIMESTAMP                     :               56,777
STATUS                        :               56,777
TEMPORARY                     :               56,777
GENERATED                     :               56,777
SECONDARY                     :               56,777
NAMESPACE                     :               56,777
EDITION_NAME                  :                    0
SHARING                       :               56,777
EDITIONABLE                   :               16,561
ORACLE_MAINTAINED             :               56,777
APPLICATION                   :               56,777
DEFAULT_COLLATION             :                5,719
DUPLICATED                    :               56,777
SHARDED                       :               56,777
CREATED_APPID                 :                    0
CREATED_VSNID                 :                    0
MODIFIED_APPID                :                    0
MODIFIED_VSNID                :                    0
ROW_COUNT                     :               56,777

-----------------

1 rows selected

As with many of the scripts I’ve published, the task is one I rarely do, so I haven’t bothered to add in all the extra little refinements that might make it foolproof and totally self-contained. In this case, for example, I’d have to go through a final couple of manual steps to edit the output by deleting the last few lines and then (since I use vi) issuing "!%!sort -k3 -n -r", which would sort the results in reverse order, numerically, according to the third field.

 

print_table()

Jonathan Lewis - Thu, 2020-11-26 03:40

Many years ago Tom Kyte published a small PL/SQL procedure to output each row in a table as a list of (column_name, value) pairs. I often use the original version, even though Tom refined it several times over the years. From time to time the links in my blog that I’ve set up to point to Tom’s code have simply stopped working, so I’ve decided to publish a reference copy on my blog in case newer readers need to find a copy and the current link above stops working.

rem
rem     Script:         print_table.sql
rem     Dated:          Nov 2001
rem     Author:         Tom Kyte - from Expert One on One Oracle.
rem
rem     Purpose:
rem     Prints the results of a query one column per line.
rem
rem     Created as an 'authid current_user' so that it can
rem     access any table that the caller is allowed to access
rem     from the SQL*Plus prompt.
rem
rem     A suitably privileged user could also make this publicly
rem     available through the public synonym and grant at the end
rem     of the script
rem
rem     Minor modifications by Jonathan Lewis to separate rows
rem     in the result set, and report the number of rows returned
rem

create or replace procedure print_table( p_query in varchar2 )
authid current_user
is
        l_theCursor     integer default dbms_sql.open_cursor;
        l_columnValue   varchar2(4000);
        l_status        integer;
        l_descTbl       dbms_sql.desc_tab;
        l_colCnt        number;
        l_rowcnt        number := 0;
begin
        execute immediate 'alter session set nls_date_format=''dd-mon-yyyy hh24:mi:ss'' ';

        dbms_sql.parse(  l_theCursor,  p_query, dbms_sql.native );
        dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );

        for i in 1 .. l_colCnt loop
                dbms_sql.define_column(l_theCursor, i, l_columnValue, 4000 );
        end loop;

        l_status := dbms_sql.execute(l_theCursor);

        while ( dbms_sql.fetch_rows(l_theCursor) > 0 ) loop
                for i in 1 .. l_colCnt loop
                        dbms_sql.column_value( l_theCursor, i, l_columnValue );
                        dbms_output.put_line(
                                rpad( l_descTbl(i).col_name, 30 )
                                || ' : ' || l_columnValue
                        );
                end loop;
                dbms_output.new_line;
                dbms_output.put_line( '-----------------' );
                dbms_output.new_line;
                l_rowcnt := l_rowcnt + 1;
        end loop;

        dbms_output.put_line(l_rowcnt || ' rows selected');

        execute immediate 'alter session set nls_date_format=''dd-mon-rr'' ';

exception
        when others then
                execute immediate 'alter session set nls_date_format=''dd-mon-rr'' ';
        raise;
end;
/

-- create public synonym print_table for print_table;
-- grant execute on print_table to public;
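
For anyone who hasn't used it before, a typical call from SQL*Plus looks like this (a minimal sketch of my own; the query text is just an example):

set serveroutput on

begin
        print_table('select * from user_tables where rownum <= 2');
end;
/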

The reason I’m publishing this now is that I’m about to publish the third variant of a piece of code I wrote many years ago, and first blogged about in 2012.

Oracle Autonomous JSON Database (AJD) - From Notation to Autonomous

On August 13th, 2020, Oracle announced the availability of Autonomous JSON Database—a new cloud service built for developers who are looking for an easy to use,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Normal Forms

Tom Kyte - Wed, 2020-11-25 21:26
Hi Tom, Just wanted to know if the Oracle database supports all the normal forms we know, i.e. 1st, 2nd, 3rd, BCNF, 4th, 5th. http://psoug.org/reference/normalization.html Is the SCOTT schema we use fully normalized, or can it be normalized up to 5th normal form? In general, up to which normal form is it sufficient to conclude that the created database is in agreement with the standards? Thanks, Raghavendra B
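To make the question concrete (the example below is mine, not the poster's): normal forms are a property of how the tables are designed, not a feature a database engine switches on; any normal form can be modelled in Oracle. For instance, an orders table that repeats the customer name on every row has a transitive dependency (order -> customer -> customer_name) and so is not in 3NF; the usual fix is to decompose it:
<code>
-- Not in 3NF: customer_name depends on customer_id, not on the key order_id.
create table orders_denorm (
  order_id      number primary key,
  customer_id   number,
  customer_name varchar2(40),
  order_date    date
);

-- Decomposed (3NF) design: every non-key column depends only on its own table's key.
create table customers (
  customer_id   number primary key,
  customer_name varchar2(40)
);

create table orders_norm (
  order_id      number primary key,
  customer_id   number references customers,
  order_date    date
);
</code>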
Categories: DBA Blogs

Directory List including modify date and times - casting the date problem

Tom Kyte - Wed, 2020-11-25 21:26
hi Tom, I started with your dir listing code and it's working out very nicely, but I am trying to figure out how to get the rest of the file info using the File object. I am having a problem with the dates specifically. Here's my attempt, but as you can see there's a problem with the dates and I can't seem to figure it out..

<code>
GRANT JAVAUSERPRIV to SOLIVER;

drop table DIR_LIST;
create global temporary table DIR_LIST
( filename varchar2(255), lastmodified date )
on commit delete rows;

create or replace and compile java source named "DirList" as
import java.io.*;
import java.sql.*;

public class DirList
{
    public static void getList(String directory) throws SQLException
    {
        String element;
        File path = new File(directory);
        File[] FileList = path.listFiles();
        String TheFile;
        Date ModiDate;
        #sql { DELETE FROM DIR_LIST};
        for(int i = 0; i < FileList.length; i++)
        {
            TheFile  = FileList[i].getAbsolutePath();
            ModiDate = new Date(FileList[i].lastModified());
            #sql { INSERT INTO DIR_LIST (FILENAME,LASTMODIFIED) VALUES (:TheFile,:ModiDate) };
        }
    }
}
/

create or replace procedure get_dir_list( p_directory in varchar2 )
as language java name 'DirList.getList( java.lang.String )';
/

--implementation
EXEC dbms_java.grant_permission( 'SOLIVER', 'java.io.FilePermission', 'c:\temp*', 'read' );
exec get_dir_list( 'c:\temp' );
select to_char(lastmodified,'YYYY-MM-DD HH24:MI:SS'), filename
from dir_list
order by lastmodified;
</code>

I would appreciate any help...
Categories: DBA Blogs

Delete on referenced partitioned table taking time

Tom Kyte - Wed, 2020-11-25 21:26
Hi Connor, Chris, I have reference partitioned tables as below. It's a daily range partition on the ORDER_DATE column. <code>
CREATE TABLE ORDERS
( ORDER_ID  NUMBER(11),
  PART_DATE DATE DEFAULT trunc(sysdate),
  .... 150 columns ....
)
PARTITION by range (PART_DATE)
( partition ORDERS_P_01_JAN_2018 values less than (to_date('02-JAN-2018','DD-MON-YYYY')),
  partition ORDERS_P_02_JAN_2018 values less than (to_date('03-JAN-2018','DD-MON-YYYY')),
  partition ORDERS_P_03_JAN_2018 values less than (to_date('04-JAN-2018','DD-MON-YYYY')),
  partition ORDERS_P_04_JAN_2018 values less than (to_date('05-JAN-2018','DD-MON-YYYY')),
  partition ORDERS_P_MAXVALUE    values less than (maxvalue)
);

CREATE TABLE ORDERS_DETAILS
( ORDER_ID NUMBER(11) NOT NULL,
  .... 70 columns ....
  CONSTRAINT FK_ORDER_ID FOREIGN KEY (ORDER_ID) REFERENCES ORDERS (ORDER_ID)
)
PARTITION BY REFERENCE (FK_ORDER_ID);
</code> The issue here is that it takes significant time to delete records from both the child "ORDER_DETAILS" and the parent "ORDERS". The daily partition record count is around 800k in "ORDERS" and 2.7 million in "ORDER_DETAILS"; these tables hold 2 months of data and the rest gets purged on a daily basis (using a native delete statement). Earlier we were deleting these records 50k rows per iteration, followed by a commit. Then we tried a limit counter of 5k, but it took the same time to finish purging all records for that day. The next option we are planning is to truncate partitions instead of using a delete statement. But before doing so, I would like to seek your help so that I can investigate further why the delete is taking so long. Could you please suggest something we can check at our end?
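One generic check worth running when parent/child deletes are slow (this is an illustration, not part of the original thread, and it is a simplified version of the classic "unindexed foreign key" query): an unindexed FK column on a child table forces the child to be scanned, and can cause locking, for every parent row deleted.
<code>
-- List foreign key columns that are not covered by any index on the child table.
select  c.table_name, c.constraint_name, cc.column_name
from    user_constraints  c
join    user_cons_columns cc
        on  cc.constraint_name = c.constraint_name
where   c.constraint_type = 'R'
and     not exists (
            select  null
            from    user_ind_columns ic
            where   ic.table_name      = cc.table_name
            and     ic.column_name     = cc.column_name
            and     ic.column_position = cc.position
        )
order by c.table_name, cc.position
;
</code>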
Categories: DBA Blogs

DynamoDB PartiQL – part II: SELECT

Yann Neuhaus - Wed, 2020-11-25 14:29
By Franck Pachot

In the previous post I inserted a few rows in a Demo table using the SQL-like new API on DynamoDB. I checked my items with a SELECT but was limited in the ORDER BY clause. Here is the most important thing to understand: there is no additional data processing engine here. PartiQL (pronounce it like ‘particle’, which helps to avoid any kind of dyslexia) parses a statement with INSERT/UPDATE/DELETE/SELECT and calls the NoSQL API you already know (Put/Get/Update/Delete Item, Query and Scan). It looks like SQL, but SQL is a language that declares operations on a set of rows, like relational tables or views, which are a logical layer above the physical model. In RDBMS, you build your SQL queries according to your business needs, not the physical layout. Of course, the physical layout (like indexing, partitioning) is also optimized for this access, but this is done independently. With PartiQL on DynamoDB you must know which operation will happen when you write your statement, because all the simplicity and scalability of DynamoDB resides on the bounded API that matches the physical layout:

  • GetItem does partition pruning + unique hash/index access to retrieve one item
  • Query does partition pruning + index access to retrieve a sorted range of items
  • Scan reads all partitions, possibly in parallel, to retrieve all items

Of course, the cost is different. With the DynamoDB API you know which one you are doing because you call a different operation. With PartiQL you should know what you are doing, but you execute the same statement (SELECT) and the data access will depend on the columns referenced in the WHERE clause. Basically, if you don't have an equality predicate on the partition (HASH) key, you have to read all partitions (Scan). If you have an equality predicate on the partition (HASH) key and an inequality on the sort (RANGE) key, you benefit from partition pruning (Query). This is obvious when you know what a hash function is, but error-prone if you don't know the data model. The DynamoDB API helps you to prevent that, because your fingers should hurt when typing "scan" for a large table.

Scan

So, if what you actually want is to get all items, because you need all of them, or maybe to filter out only a small part of them, you want a scan. Yes, it reads everything, but it is the most efficient access path to read a large portion of your table, because with one RCU you can get many items. Doing the same (getting all items) with GetItem would cost one RCU per item (I suppose strong consistency and small items here). To put it simply, for an OLTP workload (many users reading few items) you avoid scans on large tables. DynamoDB is a key-value store: the goal is to access by the key. And for some reporting, or to export data, you may scan, which is expensive (in time and RCU) but not done frequently.

As seen in the previous post, scanning the whole table to get all items with all attributes is a simple SELECT * FROM:


[opc@a aws]$ aws dynamodb execute-statement --statement "select * from Demo"

{"Items":[
{"MyKeyPart":{"N":"2"},"MyUnstructuredData":{"S":"a"},"MyKeySort":{"N":"1"}},
{"MyKeyPart":{"N":"2"},"MyUnstructuredData":{"S":"use parameters when embedding SQL in programs"},"MyKeySort":{"N":"2"}},
{"MyKeyPart":{"N":"2"},"MyUnstructuredData":{"S":"c"},"MyKeySort":{"N":"3"}},
{"MyKeyPart":{"N":"2"},"MyUnstructuredData":{"S":"d"},"MyKeySort":{"N":"4"}},
{"MyKeyPart":{"N":"2"},"MyUnstructuredData":{"S":"e"},"MyKeySort":{"N":"5"}},
{"MyKeyPart":{"N":"1"},"MyUnstructuredData":{"S":"here is my first insert :)"},"MyKeySort":{"N":"1"}}
]}

As long as there’s no equality predicate on the primary key (or the hash part of it in the case of a composite hash/sort key) the SELECT will do a scan. I mentioned "equality"; we will see later what happens when there are many equality predicates, or a list of values to be equal to. We will also see later, probably in a further post, what happens with secondary indexes. Anyway, this is not an RDBMS. When you query the table, there's no query planner to optimize the access to read from an index. If you want to access by a secondary index, the index name must be mentioned in the FROM clause.

Another thing that we have seen in the previous post is that, as it is a scan, you cannot have the partition key in the ORDER BY, because DynamoDB does not sort the rows when they are retrieved from multiple partitions, and PartiQL does not do further data processing on the result. So, basically, there's no possible ORDER BY when there is no WHERE clause on the partition key:



[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo order by MyKeyPart"

An error occurred (ValidationException) when calling the ExecuteStatement operation: Must have WHERE clause in the statement when using ORDER BY clause.

[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo order by MyKeySort"

An error occurred (ValidationException) when calling the ExecuteStatement operation: Must have WHERE clause in the statement when using ORDER BY clause.

Query

Then, if we query for one partition only, this is a Query rather than a Scan. Here is an example where I select only the items where MyKeyPart = 2 which, with the HASH function, maps to only one partition:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart = 2"

{"Items":[{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}}]}

The items are ordered by MyKeySort even in the absence of ORDER BY because this is how they are stored and retrieved physically within each partition. But, as SQL is a declarative language, I prefer not to rely on the order without an ORDER BY clause.

Here is the correct way to do it, with no additional cost:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart = 2 order by MyKeySort"
{"Items":[{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}}]}


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart = 2 order by MyKeySort desc"
{"Items":[{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}}]}

Here because there is only one value for MyKeyPart I didn’t need to put MyKeyPart in the ORDER BY, but with multiple values you need to:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo where MyKeyPart = 1  or MyKeyPart = 2 order by MyKeySort"

An error occurred (ValidationException) when calling the ExecuteStatement operation: Must have hash key in ORDER BY clause when more than one hash key condition specified in WHERE clause.

[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo where MyKeyPart = 1  or MyKeyPart = 2 order by MyKeyPart,MyKeyPart desc"

{"Items":[{"MyKeyPart":{"N":"1"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}}]}

[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo where MyKeyPart = 1  or MyKeyPart = 2 order by MyKeyPart desc,MyKeyPart"

{"Items":[{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"1"},"MyKeySort":{"N":"1"}}]}

You might be surprised to see this query with multiple values run as a Query rather than a Scan. What if they come from multiple partitions?
This is possible when the number of values is well known in advance (“1” and “2” here) and then this can be sorted first, and a Query run for each of them. Of course, this will multiply the cost of it. For example, because I know that I inserted values 1 to 5, I can get all my items with:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo where MyKeyPart in [1,2,3,4,5] order by MyKeyPart,MyKeySort"

{"Items":[{"MyKeyPart":{"N":"1"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}}]}

So I’m able to get all items sorted now? Yes, but at a higher cost than a scan, because it will query them one by one. It would be cheaper to Scan here, but there is no optimizer to estimate the cost of both operations and choose the cheaper one. But at least the cost is predictable, as it is proportional to the number of key values in the list.

I cannot use inequalities, or BETWEEN, because they work on a range and this Query access can be done only on known values.


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart between 2 and 2 order by MyKeyPart,MyKeySort"

An error occurred (ValidationException) when calling the ExecuteStatement operation: Must have at least one non-optional hash key condition in WHERE clause when using ORDER BY clause.

Here, even if maths tells me that it is equivalent to an equality ("MyKeyPart between 2 and 2" is the same as "MyKeyPart = 2"), we have no optimizer there to do those transformations. The rule is basic: a set of values can be sorted and queried individually, but anything else is considered a range of values that cannot be accessed with a hash function.
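
Note that this restriction concerns the partition (HASH) key only. A range predicate on the sort (RANGE) key, combined with an equality on the partition key, is exactly what a Query is built for. The following statement is my own addition to illustrate it (same Demo table, values assumed from the inserts above); under that assumption it should still be served by a Query rather than a Scan:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo where MyKeyPart = 2 and MyKeySort between 2 and 4 order by MyKeySort"
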

How can I be sure about this behaviour? I have a small table where the response time difference is not significant. The best proof is to see what happens when a full table scan is impossible. There's an IAM policy condition to deny scans:

PartiQL documentation (https://t.co/ezzXfsyADB) explains the scenarios when a SELECT can become a SCAN. You can also explicitly deny Scan via IAM policy (https://t.co/elVs4HpvtM) to avoid full table scans.

— DynamoDB (@dynamodb) November 24, 2020


I have created a user with deny on “dynamodb:PartiQLSelect” action on condition “dynamodb:FullTableScan”=”True”

With this user profile I execute the following:


[opc@a aws]$ aws --profile noscan dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo"

An error occurred (AccessDeniedException) when calling the ExecuteStatement operation: User: arn:aws:iam::802756008554:user/ddb-noscan is not authorized to perform: dynamodb:PartiQLSelect on resource: arn:aws:dynamodb:eu-west-1:802756008554:table/Demo with an explicit deny

[opc@a aws]$ aws --profile noscan dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo where MyKeyPart in [1,2,3,4,5]"                

{"Items":[{"MyKeyPart":{"N":"1"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}}]}

It is clear that when a Full Table Scan is denied, the WHERE on a list of 5 values is still possible, because it is 5 Query calls instead of a Scan.

Pagination

I have additionally inserted many rows with MyKeyPart=10 and large attributes, and then queried them:


$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart=10"

{"Items":[{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"5"}}],"NextToken":"CS4sUIPi4Efg7eSg4sGJZHJ09C/m8JWMwLXB+DF5n54EIBl6yuPZNAHfoRUFg7qgGg872qXswoXSZEI/XAIfvUPNisWSYGrPiquxLFakMecd6aF/ggaexxpKlhPS+ridkOXu8HoWIuWgSXFRBa32QmIXITRhrSMwuT1Q54+6Li6emcxvtpJfmxvxWf/yQkece5nqQIwH/EC3vAr1SZ4Pd537qexKejVHJ+2QrXALwG283UR/obWc53A2HTQ+G3cNeL4xOvVwp9gsOhlKxhsRrS+GqHRF0IHlGrpsdc0LkbMS1hISuagp/KZ0dqP/v7ejB6HsEHhFYZeKYZBoysTYTzhpB02NF3F4MSKp8QF4nO4vcq4="}

I get a few items and a “Next Token” that is quite large.

I can query the next pages with the –next-token option:


$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart=10" \
--next-token CS4sUIPi4Efg7eSg4sGJZHJ09C/m8JWMwLXB+DF5n54EIBl6yuPZNAHfoRUFg7qgGg872qXswoXSZEI/XAIfvUPNisWSYGrPiquxLFakMecd6aF/ggaexxpKlhPS+ridkOXu8HoWIuWgSXFRBa32QmIXITRhrSMwuT1Q54+6Li6emcxvtpJfmxvxWf/yQkece5nqQIwH/EC3vAr1SZ4Pd537qexKejVHJ+2QrXALwG283UR/obWc53A2HTQ+G3cNeL4xOvVwp9gsOhlKxhsRrS+GqHRF0IHlGrpsdc0LkbMS1hISuagp/KZ0dqP/v7ejB6HsEHhFYZeKYZBoysTYTzhpB02NF3F4MSKp8QF4nO4vcq4=

{"Items":[{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"6"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"7"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"8"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"9"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"10"}}],"NextToken":"FjHEA2wnIK74SlGaS6TiPSv2fEwfiZhJNHyxvJ+qG750oeKlqSNyx9IDdCUD+m2rSpodPIFJhYYQHXBM9sJed3k6qaA/aUk4s4DUlPvZHl7WAJ4rTY0AmNDUYBPqWyCV8FliSsGPtFTfj1A9T4zD1TU6uuvNIORY/zKHtsAjWzT4Jsg5y32MFcVOmOsDBhyWsQotFqxy1ErMGhJy3cQnEvy1P1KpQak6sflzp3sWLWzUgOXQB/xF1PXRtT8w/E1lPk26LnA/L2bA91nucuohN63hP3MVojPH0GkPCjZsx08wJTn4MEpqDArEREWO2XCkL/GI7vTtYw6GXRenKZoatSG55yKCVDkFRuw7cbK749mEIb6r6Xs="}

Again, this is completely different from SQL databases where you have cursors, but it is well adapted to DynamoDB queries, which read ranges of items in small chunks.

Projection

I used SELECT with * to get the whole item key and attributes (like ALL_ATTRIBUTES), and with a list of attributes to do a query projection (like SPECIFIC_ATTRIBUTES). There’s no aggregation and I don’t think we can do the equivalent of COUNT. Here is how I would do it if it were possible:


[opc@a aws]$ aws dynamodb execute-statement --statement "select count(*) from Demo"

An error occurred (ValidationException) when calling the ExecuteStatement operation: Unexpected path component at 1:8

This is clearly not supported (yet).

According to the documentation expressions should be allowed, like this:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort,size(MyUnstructuredData) from Demo where MyKeyPart=2 and size(MyUnstructuredData)>10"

An error occurred (ValidationException) when calling the ExecuteStatement operation: Unexpected path component at 1:28

Apparently, this size() function is allowed only in the WHERE clause:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart=2 and size(MyUnstructuredData)>10"

{"Items":[{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}}]}

In summary, be careful. You must know which attributes you filter on in the WHERE clause. Equality on the partition key allows single hash partition access. Without it, it is a scan, which can take time and a lot of RCUs.

The post DynamoDB PartiQL – part II: SELECT appeared first on the dbi services blog.

Oracle Fusion Procurement R13 - Reports and Analyses

Richard Byrom - Wed, 2020-11-25 12:32

With each major release Oracle produces a seeded listing of reports. Periodically, with the quarterly updates, one might see a few new reports appear. I always find it quite useful to have this to hand when working with clients as it is searchable and you can filter by specific modules. Download the Oracle Procurement Cloud Release 13 Reports and Analyses Listing which outlines what’s available for Sourcing, Suppliers, Purchasing, Self Service Procurement and Purchasing.

Documentum Administrator plugins installation not possible

Yann Neuhaus - Wed, 2020-11-25 07:27

In some customers’ contexts you cannot install whatever you want, for policy or security reasons. And Documentum components really love to install client software to interact with them. In this blog we will discuss the Documentum Administrator tool and the plugins it asks to install in order to gain full access to its functionality.

Documentum Administrator’s plugins

DA requires at least 3 things to be installed on client machines in order to have the file transfer working:

  • The browser extension (IE, Chrome, Firefox, aso)
  • The NativeSetup.exe
  • You will have to change some settings in the browsers as well, in order to allow the execution of some scripts for these sites, white list it, and so on

These two programs allow transferring and viewing files from the Documentum Administrator web application. I think it should be the same for Webtop as it's based on the same framework. Unfortunately, some customers cannot allow users, even administrators, to run specific exe files on their machines. So we tried to install them manually with some policy exceptions and elevated rights, but it didn't work very well as our policies were too restrictive.

In addition, we were not allowed to change some parameters in the browsers as they were managed by the security team. So what can we do?

Setup Documentum Administrator in HTTP mode

Depending on what you need to do with the DA, you can enable the HTTP mode. This mode uses the default browser/HTTP transfer mechanisms, hence it does not require anything to be installed on the end user machine. However, this mode has some drawbacks and limitations.

Here is how to enable the HTTP mode:

  1. Login to the web server and navigate to the exploded war file folder
  2. Locate the file: da/wdk/app.xml
  3. Change <default-mechanism> in <contentxfer> to http
  4. Save the file and restart the web application server
  5. Allow popups in the browser

Now the content transfer has been set to http mode.

Limitations

HTTP content transfer is supported for XML files but only for a single file used with the Default XML Application. For virtual documents, only the root (parent) file is transferred. The browser handles the launch of viewing and editing applications.

The checkout directory is not configurable. To check out a document, users must select Edit, then either save the document or open and save it. On checkin, the user must navigate to the location in which the document was saved. User preferences are not supported in this mode.

Feature | UCF | HTTP
Download to client | UCF Deployment at client side | No client-side deployment
Drag and drop | Supported on IE Browser | Not supported
Viewing or editing application | Configurable | Controlled by browser
ACS support | Supported | Limited support for export or edit; not supported for view with relative links
Progress display | Supported | Supported only by certain browser
Preferences | Supported | Not supported
Restart interrupted operation | Supported | Not supported
Checkout | Supported | Limited support. User must save and select checkout location
Edit | Supported | Limited support. User must save and select checkout location
Checkin | Supported | Limited support. User must navigate to saved document
View | Supported; does not transfer content if file is up to date on client | Supported; always transfers content
Export | Supported | Supported
Import | Supported | Limited support. Single file selection at a time, no folder import
Client xfer tracing | Supported | Not supported
Server xfer tracing | Supported | Supported
File compression | Supported, with configurable exceptions | Turn on HTTP compression in web.xml
XML application | Supported | Import single file against Default XML Application
Virtual document | Supported | Root only

Location | UCF | HTTP
Temp working directory for upload/download | temp.working.dir | None
Checked out files | checkout.dir | User must save file, then check in from file
Exported files | export.dir or user choice | Browser specific UI to select a download location
Viewed files | viewed.dir | Browser temporary internet files location, for example $java{user.home}\Local Settings\Temporary Internet Files
User location (base for other locations) | user.dir (defined in config file) | None
Registry file location | registry.file | None
Registry mode | registry.mode | None
Log file location | logs.dir | None
Tracing/debug | tracing.enabled | None
File polling | file.poll.interval | None

The post Documentum Administrator plugins installation not possible appeared first on the dbi services blog.

Azure Certifications for Beginners, System Admins, Developers, Architects, Data and AI Professionals

Online Apps DBA - Wed, 2020-11-25 05:53

Cloud computing is the fastest growing industry, spanning public and private cloud. High salaries and a broad range of options for cloud computing experts have made it a sought-after field for IT professionals. Microsoft Azure is growing and provides lots of certifications. Do you want to know what certifications Microsoft offers for Azure Cloud? […]

The post Azure Certifications for Beginners, System Admins, Developers, Architects, Data and AI Professionals appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Azure Event Hubs and Event Grid | Data Ingestion with Event Hub: Everything you Need to Know

Online Apps DBA - Wed, 2020-11-25 05:43

Big data has revolutionized the way we do business. Analyzing information in real-time to capture actionable insights can be the difference between maintaining your edge and lagging. But building the right infrastructure to manage your constant flow of data becomes complex. Azure Event Hubs streamline your data pipelines for you. So, the journey from input […]

The post Azure Event Hubs and Event Grid | Data Ingestion with Event Hub: Everything you Need to Know appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Documentum CTS reinstallation failing Null Pointer

Yann Neuhaus - Wed, 2020-11-25 03:37

I ran into an issue at a customer where I had to upgrade the CTS in place. When you have to upgrade a CTS on the same server you first have to uninstall the old version in order to install the new one. Unfortunately, the uninstallation didn't end well and some folders and configuration weren't removed properly.

Removing CTS manually

If the uninstallation failed, you will not be able to run the uninstaller again to remove everything, as it thinks that it has already been uninstalled. So you'll have to remove the components manually.

  1. Remove the 3 services for CTS: You will find 3 services with the name beginning with “Documentum”, remove them.
  2. Remove the installation folder: By default the installation folder is located in C:\Documentum, you can remove it or at least change its name.
  3. There are some keys in the registry; normally you can keep them, as they don't prevent the software from being reinstalled. Note that you can search for "Documentum" in the registry to find all the keys concerned.
  4. Remove the DFC if it is still there (Add or Remove Programs).
  5. In the targeted repository, remove the folders System/Media Server, System/Applications/Transformation and System/Applications/Rich_Media_services. This is where the transformation profiles are located. Note that you may also keep them, as the latest installers can dump them in order to replicate your configuration.
Running the installer fails with a null pointer

I had the issue where I had followed all the recommendations above, and yet when I ran the installer it failed with a null pointer. Even in the GUI I saw it trying to install to the path null/Documentum.

It is even impossible to change it manually, as the path is generated by the installer itself. I thought it was due to the registry keys, but it wasn't: I changed a lot of them manually and nothing changed.

The issue came from a file that wasn't deleted by the uninstallation. This file holds information about the previous installation and is located here: C:\Program Files\Zero G Registry\.com.zerog.registry.xml

You'll have to display hidden files and folders to see it. Delete this file and the installation will now go smoothly.

The post Documentum CTS reinstallation failing Null Pointer appeared first on the dbi services blog.

Terraform Beginner’s Guide

Online Apps DBA - Wed, 2020-11-25 03:21

Are you a beginner planning to learn Terraform? Then this blog post is for you! Check out this blog at k21academy.com/terraform17 to learn the basic fundamentals of Terraform and how it works before you dive deep. The blog covers: • Infrastructure as Code and IaC tools • What Is Terraform? • Terraform Lifecycle […]

The post Terraform Beginner’s Guide appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Finding the number of rows in each table by a single sql

Tom Kyte - Wed, 2020-11-25 03:06
hi tom, i have a database with 125 tables. i can find the total number of tables in the database with the sql "select * from tab;". now i would like to know the number of rows in each table of my database without executing "select count(*) from ...;" each time. please help me. sincerely, rajesh
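A sketch of the two usual approaches (my own illustration, not the original answer): either rely on optimizer statistics, which are only as fresh as the last gather, or loop over the tables and count for real with dynamic SQL:
<code>
-- 1) Approximate counts, as of the last statistics gather:
exec dbms_stats.gather_schema_stats(user)
select table_name, num_rows from user_tables order by table_name;

-- 2) Exact counts, one query per table, via dynamic SQL:
set serveroutput on
declare
  l_count number;
begin
  for t in (select table_name from user_tables order by table_name) loop
    execute immediate 'select count(*) from "' || t.table_name || '"' into l_count;
    dbms_output.put_line(rpad(t.table_name, 32) || l_count);
  end loop;
end;
/
</code>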
Categories: DBA Blogs
