[Free] 2019(Nov) EnsurePass Oracle 1z0-062 Dumps with VCE and PDF 51-60


Question No.51

Examine the contents of the SQL*Loader control file:

[Image: SQL*Loader control file contents (not reproduced)]

Which three statements are true regarding the SQL*Loader operation performed using the control file? (Choose three.)

  1. An EMP table is created if a table does not exist. Otherwise, the EMP table is appended with the loaded data.

  2. The SQL*Loader data file myfile1.dat has the column names for the EMP table.

  3. The SQL*Loader operation fails because no record terminators are specified.

  4. Field names should be the first line in both the SQL*Loader data files.

  5. The SQL*Loader operation assumes that the file must be a stream record format file with the normal carriage return string as the record terminator.

Correct Answer: ABE

Explanation:

A: The APPEND keyword tells SQL*Loader to preserve any preexisting data in the table. Other options allow you to delete preexisting data, or to fail with an error if the table is not empty to begin with.

B (not D):

SQL*Loader-00210: first data file is empty, cannot process the FIELD NAMES record

Cause: The data file listed in the next message was empty. Therefore, the FIELD NAMES FIRST FILE directive could not be processed.

Action: Check the listed data file and fix it. Then retry the operation.

E:

A comma-separated values (CSV) file (also sometimes called a character-separated values file, because the separator character does not have to be a comma) stores tabular data (numbers and text) in plain-text form. Plain text means that the file is a sequence of characters, with no data that has to be interpreted as binary numbers. A CSV file consists of any number of records, separated by line breaks of some kind; each record consists of fields, separated by some other character or string, most commonly a literal comma or tab. Usually, all records have an identical sequence of fields.

Fields with embedded commas must be quoted. Example:

1997,Ford,E350,"Super, luxurious truck"

Note:

SQL*Loader is a bulk loader utility used for moving data from external files into the Oracle database.
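For illustration, a control file consistent with the explanation above might look like the following. This is a hedged sketch only: the original image is not reproduced, so the data file names, column list, and exact clause layout are assumptions.

LOAD DATA
INFILE 'myfile1.dat'                  -- stream record format; newline is the default record terminator
INFILE 'myfile2.dat'
APPEND                                -- keep existing EMP rows and add the loaded data
INTO TABLE EMP
FIELD NAMES FIRST FILE                -- read the column names from the first line of myfile1.dat
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, deptno)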

Question No.52

Identify three valid methods of opening pluggable databases (PDBs). (Choose three.)

  1. ALTER PLUGGABLE DATABASE OPEN ALL issued from the root

  2. ALTER PLUGGABLE DATABASE OPEN ALL issued from a PDB

  3. ALTER PLUGGABLE DATABASE PDB OPEN issued from the seed

  4. ALTER DATABASE PDB OPEN issued from the root

  5. ALTER DATABASE OPEN issued from that PDB

  6. ALTER PLUGGABLE DATABASE PDB OPEN issued from another PDB

  7. ALTER PLUGGABLE DATABASE OPEN issued from that PDB

Correct Answer: AEG

Explanation:

E: You can perform all ALTER PLUGGABLE DATABASE tasks by connecting to a PDB and running the corresponding ALTER DATABASE statement. This functionality is provided to maintain backward compatibility for applications that have been migrated to a CDB environment.

A, G: When you issue an ALTER PLUGGABLE DATABASE OPEN statement, READ WRITE is the default unless a PDB being opened belongs to a CDB that is used as a physical standby database, in which case READ ONLY is the default. You can specify which PDBs to modify in the following ways:

List one or more PDBs.

Specify ALL to modify all of the PDBs.

Specify ALL EXCEPT to modify all of the PDBs, except for the PDBs listed.
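As a hedged illustration of the valid forms (the PDB name pdb1 is an assumption; note that the documented syntax places the PDB list, or ALL, before the OPEN keyword):

-- connected to the root (CDB$ROOT) as a suitably privileged common user:
SQL> ALTER PLUGGABLE DATABASE ALL OPEN;
SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN READ WRITE;

-- connected directly to the PDB itself:
SQL> ALTER PLUGGABLE DATABASE OPEN;
SQL> ALTER DATABASE OPEN;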

Question No.53

On your Oracle 12c database, you invoked SQL*Loader to load data into the EMPLOYEES table in the HR schema by issuing the following command:

$> sqlldr hr/hr@pdb table=employees

Which two statements are true regarding the command? (Choose two.)

  1. It succeeds with default settings if the EMPLOYEES table belonging to HR is already defined in the database.

  2. It fails because no SQL*Loader data file location is specified.

  3. It fails if the HR user does not have the CREATE ANY DIRECTORY privilege.

  4. It fails because no SQL*Loader control file location is specified.

Correct Answer: AC

Explanation:

SQL*Loader is invoked when you specify the sqlldr command and, optionally, parameters that establish session characteristics. Because only the TABLE parameter is supplied here, SQL*Loader runs in express mode: no control file is needed, and the data is expected by default in a file named employees.dat in the current directory.
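A minimal sketch of the express-mode defaults this command relies on (the directory path is an illustrative assumption):

$> cd /home/hr/load                  -- directory that must already contain employees.dat
$> sqlldr hr/hr@pdb table=employees
-- express mode generates its own control file, defaults to an external-table load,
-- and writes its log files to the current directory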

Question No.54

Examine this command:

SQL> exec DBMS_STATS.SET_TABLE_PREFS('SH', 'CUSTOMERS', 'PUBLISH', 'false');

Which three statements are true about the effect of this command? (Choose three.)

  1. Statistics collection is not done for the CUSTOMERS table when schema stats are gathered.

  2. Statistics collection is not done for the CUSTOMERS table when database stats are gathered.

  3. Any existing statistics for the CUSTOMERS table are still available to the optimizer at parse time.

  4. Statistics gathered on the CUSTOMERS table when schema stats are gathered are stored as pending statistics.

  5. Statistics gathered on the CUSTOMERS table when database stats are gathered are stored as pending statistics.

Correct Answer: CDE

Explanation:

SET_TABLE_PREFS Procedure

This procedure is used to set the statistics preferences of the specified table in the specified schema.

Example:

Using Pending Statistics

Assume many modifications have been made to the employees table since the last time statistics were gathered. To ensure that the cost-based optimizer is still picking the best plan, statistics should be gathered once again; however, the user is concerned that new statistics will cause the optimizer to choose bad plans when the current ones are acceptable. The user can do the following:

EXEC DBMS_STATS.SET_TABLE_PREFS('hr', 'employees', 'PUBLISH', 'false');

By setting the employees table's PUBLISH preference to FALSE, any statistics gathered from now on are not automatically published. The newly gathered statistics are marked as pending.
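A minimal sketch of what follows the preference change, using the SH.CUSTOMERS table from the question (the session-level test step is optional):

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'CUSTOMERS');          -- stored as pending, not published
SQL> ALTER SESSION SET OPTIMIZER_USE_PENDING_STATISTICS = TRUE;      -- validate plans against the pending stats
SQL> EXEC DBMS_STATS.PUBLISH_PENDING_STATS('SH', 'CUSTOMERS');       -- publish once satisfied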

Question No.55

You execute the following commands to audit database activities:

SQL> ALTER SYSTEM SET AUDIT_TRAIL=DB, EXTENDED SCOPE=SPFILE;

SQL> AUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE BY JOHN BY SESSION WHENEVER SUCCESSFUL;

Which statement is true about the audit records generated when auditing starts after the instance restarts?

  1. One audit record is created for every successful execution of a SELECT, INSERT, or DELETE command on a table, and contains the SQL text for the SQL statements.

  2. One audit record is created for every successful execution of a SELECT, INSERT, or DELETE command, and contains the execution plan for the SQL statements.

  3. One audit record is created for the whole session if JOHN successfully executes a SELECT, INSERT, or DELETE command, and contains the execution plan for the SQL statements.

  4. One audit record is created for the whole session if JOHN successfully executes a SELECT command, and contains the SQL text and bind variables used.

  5. One audit record is created for the whole session if JOHN successfully executes a SELECT, INSERT, or DELETE command on a table, and contains the execution plan, SQL text, and bind variables used.

Correct Answer: A

Explanation:

BY SESSION

In earlier releases, BY SESSION caused the database to write a single record for all SQL statements or operations of the same type executed on the same schema objects in the same session. Beginning with this release (11g) of Oracle Database, both BY SESSION and BY ACCESS cause Oracle Database to write one audit record for each audited statement and operation.

BY ACCESS

Specify BY ACCESS if you want Oracle Database to write one record for each audited statement and operation.

Note:

If you specify either a SQL statement shortcut or a system privilege that audits a data definition language (DDL) statement, then the database always audits by access. In all other cases, the database honors the BY SESSION or BY ACCESS specification.

For each audited operation, Oracle Database produces an audit record containing this information:

The user performing the operation
The type of operation
The object involved in the operation
The date and time of the operation
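Once the instance is restarted with AUDIT_TRAIL=DB, EXTENDED and the AUDIT statement above is in effect, the captured records, including the SQL text and bind values recorded because of the EXTENDED setting, can be inspected with a query such as this (traditional, non-unified auditing is assumed, as in the question):

SQL> SELECT username, obj_name, action_name, sql_text, sql_bind
     FROM   dba_audit_trail
     WHERE  username = 'JOHN';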

Question No.56

You notice a performance change in your production Oracle database and you want to know which change caused this performance difference.

You generate the Compare Period Automatic Database Diagnostic Monitor (ADDM) report to investigate further.

Which three findings would you get from the report? (Choose three.)

  1. It detects any configuration change that caused a performance difference in both time periods.

  2. It identifies any workload change that caused a performance difference in both time periods.

  3. It detects the top wait events causing performance degradation.

  4. It shows the resource usage for CPU, memory, and I/O in both time periods.

  5. It shows the difference in the size of memory pools in both time periods.

  6. It gives information about statistics collection in both time periods.

Correct Answer: ABD

Explanation:

Keyword: shows the difference.

Full ADDM analysis across two AWR snapshot periods
Detects causes, measures effects, then correlates them
Causes: workload changes, configuration changes
Effects: regressed SQL, reaching resource limits (CPU, I/O, memory, interconnect)
Makes actionable recommendations along with quantified impact
Identifies what changed: configuration changes, workload changes

Performance degradation of the database occurs when your database was performing optimally in the past, such as 6 months ago, but has gradually degraded to a point where it becomes noticeable to the users. The Automatic Workload Repository (AWR) Compare Periods report enables you to compare database performance between two periods of time.

While an AWR report shows AWR data between two snapshots (or two points in time), the AWR Compare Periods report shows the difference between two periods (or two AWR reports with a total of four snapshots). Using the AWR Compare Periods report helps you to identify detailed performance attributes and configuration settings that differ between two time periods.
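For illustration, the AWR Compare Periods report referenced above can be generated from SQL*Plus with the standard script shipped in ?/rdbms/admin; the script prompts for the report format and for the begin/end snapshots of each of the two periods:

$> sqlplus / as sysdba
SQL> @?/rdbms/admin/awrddrpt.sql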

Question No.57

You are about to plug a multi-terabyte non-CDB into an existing multitenant container database (CDB).

The characteristics of the non-CDB are as follows:

Version: Oracle Database 11g Release 2 (11.2.0.2.0) 64-bit
Character set: AL32UTF8
National character set: AL16UTF16
O/S: Oracle Linux 6 64-bit

The characteristics of the CDB are as follows:

Version: Oracle Database 12c Release 1 64-bit
Character set: AL32UTF8
National character set: AL16UTF16
O/S: Oracle Linux 6 64-bit

Which technique should you use to minimize downtime while plugging this non-CDB into the CDB?

  1. Transportable database

  2. Transportable tablespace

  3. Data Pump full export/import

  4. The DBMS_PDB package

  5. RMAN

Correct Answer: B

Explanation:

Overview, example:

Log in to ncdb12c as SYS.
Get the database into a consistent state by shutting it down cleanly.
Open the database in read-only mode.
Run DBMS_PDB.DESCRIBE to create an XML file describing the database.
Shut down ncdb12c.
Connect to the target CDB (CDB2).
Check whether the non-CDB (NCDB12c) can be plugged into the CDB (CDB2).
Plug in the non-CDB (NCDB12c) as a PDB (NCDB12c) into the target CDB (CDB2).
Access the PDB and run the noncdb_to_pdb.sql script.
Open the new PDB in read/write mode.

You can easily plug an Oracle Database 12c non-CDB into a CDB. Just create a PDB manifest file for the non-CDB, and then use the manifest file to create a cloned PDB in the CDB.

Note that to plug a non-CDB into a CDB, the non-CDB needs to be of version 12c as well. So existing 11g databases will need to be upgraded to 12c before they can be part of a 12c CDB.
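As a hedged sketch of the DBMS_PDB flow outlined above (file paths and the PDB name are illustrative, and the steps apply only after the 11.2 non-CDB has been upgraded to 12c):

-- in the (upgraded) non-CDB:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP OPEN READ ONLY
SQL> EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/ncdb12c.xml');
SQL> SHUTDOWN IMMEDIATE

-- in the target CDB (CDB2):
SQL> CREATE PLUGGABLE DATABASE ncdb12c USING '/tmp/ncdb12c.xml' COPY;
SQL> ALTER SESSION SET CONTAINER = ncdb12c;
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
SQL> ALTER PLUGGABLE DATABASE OPEN;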

Question No.58

An application accesses a small lookup table frequently. You notice that the required data blocks are getting aged out of the default buffer cache. How would you guarantee that the blocks for the table never age out?

  1. Configure the KEEP buffer pool and alter the table with the corresponding storage clause.

  2. Increase the database buffer cache size.

  3. Configure the RECYCLE buffer pool and alter the table with the corresponding storage clause.

  4. Configure Automatic Shared Memory Management.

  5. Configure Automatic Memory Management.

Correct Answer: A

Explanation:

Schema objects are referenced with varying usage patterns; therefore, their cache behavior may be quite different. Multiple buffer pools enable you to address these differences. You can use a KEEP buffer pool to maintain objects in the buffer cache and a RECYCLE buffer pool to prevent objects from consuming unnecessary space in the cache. When an object is allocated to a cache, all blocks from that object are placed in that cache. Oracle maintains a DEFAULT buffer pool for objects that have not been assigned to one of the buffer pools.
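A minimal sketch of the recommended approach (the pool size, schema, and table names are assumptions):

SQL> ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 64M;                  -- allocate the KEEP buffer pool
SQL> ALTER TABLE app.lookup_codes STORAGE (BUFFER_POOL KEEP);    -- cache this table's blocks in it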

Question No.59

You configure your database instance to support shared server connections. Which two memory areas that are part of the PGA are stored in the SGA instead for shared server connections? (Choose two.)

  1. User session data

  2. Stack space

  3. Private SQL area

  4. Location of the runtime area for DML and DDL Statements

  5. Location of a part of the runtime area for SELECT statements

Correct Answer: AC

Explanation:

A: The PGA itself is subdivided. The UGA (User Global Area) contains session state information, including package-level variables, cursor state, and so on. Note that, with shared server, the UGA is in the SGA. It has to be, because shared server means that the session state needs to be accessible to all server processes, as any one of them could be assigned a particular session. However, with dedicated server (which is likely what you're using), the UGA is allocated in the PGA.

C: The location of a private SQL area depends on the type of connection established for a session. If a session is connected through a dedicated server, private SQL areas are located in the server process's PGA. However, if a session is connected through a shared server, part of the private SQL area is kept in the SGA.

Note:

System global area (SGA)

The SGA is a group of shared memory structures, known as SGA components, that contain data and control information for one Oracle Database instance. The SGA is shared by all server and background processes. Examples of data stored in the SGA include cached data blocks and shared SQL areas.

Program global area (PGA)

A PGA is a memory region that contains data and control information for a server process. It is nonshared memory created by Oracle Database when a server process is started. Access to the PGA is exclusive to the server process. There is one PGA for each server process. Background processes also allocate their own PGAs. The total memory used by all individual PGAs is known as the total instance PGA memory, and the collection of individual PGAs is referred to as the total instance PGA, or just instance PGA. You use database initialization parameters to set the size of the instance PGA, not individual PGAs.
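For illustration, a query comparing the current session's UGA and PGA memory statistics (under shared server the UGA portion is allocated in the SGA, typically in the large pool when one is configured):

SQL> SELECT n.name, s.value
     FROM   v$mystat s
     JOIN   v$statname n ON n.statistic# = s.statistic#
     WHERE  n.name IN ('session uga memory', 'session pga memory');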

Question No.60

You plan to create a database by using the Database Configuration Assistant (DBCA), with the following specifications:

Applications will connect to the database via a middle tier.
The number of concurrent user connections will be high.
The database will have a mixed workload, with the execution of complex BI queries scheduled at night.

Which DBCA option must you choose to create the database?

  1. a General Purpose database template with default memory allocation

  2. a Data Warehouse database template, with the dedicated server mode option and AMM enabled

  3. a General Purpose database template, with the shared server mode option and Automatic Memory Management (AMM) enabled

  4. a default database configuration

Correct Answer: C

Explanation:

http://www.oracledistilled.com/oracle-database/administration/creating-a-database-using-database-configuration-assistant/
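As a rough illustration of what the shared server and AMM choices in DBCA translate to, the resulting instance would carry initialization parameters along these lines (the values shown are assumptions, not recommendations):

SQL> ALTER SYSTEM SET MEMORY_TARGET = 4G SCOPE = SPFILE;                            -- Automatic Memory Management
SQL> ALTER SYSTEM SET SHARED_SERVERS = 10 SCOPE = BOTH;                             -- shared server mode
SQL> ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=3)' SCOPE = BOTH;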
