New things: Oracle 12c!!
· Cluster Health Advisor – chactl, ochad, ora.chad
· Memory guard – qosmserver
· OCLUMON Command Reference
· MAX_STRING_SIZE
· PGA_AGGREGATE_LIMIT
· THREADED_EXECUTION
· Alert.log: New timestamp format
· Flex Cluster – Hub Nodes, Leaf Nodes
· Flex ASM
Cluster Health Advisor – chactl, ochad, ora.chad
Oracle Cluster Health Advisor runs as a highly available cluster resource, ochad, on each node in the cluster. Each Oracle Cluster Health Advisor daemon (ochad) monitors the operating system on the cluster node and, optionally, each Oracle Real Application Clusters (Oracle RAC) database instance on the node. Oracle Cluster Health Advisor raises and clears problems autonomously and stores the history in the Grid Infrastructure Management Repository (GIMR). The Oracle Grid Infrastructure user can query the stored information using CHACTL.
$ crsctl stat res ora.chad -p | grep "DEPENDENCIES\b"
START_DEPENDENCIES=hard(global:ora.mgmtdb) pullup(global:ora.mgmtdb)
STOP_DEPENDENCIES=hard(global:intermediate:ora.mgmtdb)
$ chactl status -verbose
monitoring nodes node-ind01, node-ind02 using model DEFAULT_CLUSTER
not monitoring databases
$ chactl query model
Models: DEFAULT_CLUSTER, DEFAULT_DB
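CHACTL is also how you turn monitoring on for a database and report problems it found; a hedged sketch (the database name orcl12c and the time window are hypothetical):
$ chactl monitor database -db orcl12c
$ chactl query diagnosis -db orcl12c -start "2018-05-01 01:00:00" -end "2018-05-01 03:00:00"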
Memory guard – qosmserver
Memory Guard continuously monitors and ensures the availability of cluster nodes by preventing them from being evicted when they are stressed due to lack of memory. Memory Guard gets its information from Cluster Health Monitor. When a node runs low on memory, Memory Guard prevents new database sessions from being created on the afflicted node and stops all CRS-managed services transactionally on that node, allowing the existing workload to complete and free its memory. When Memory Guard determines that the memory stress has been relieved, it restores connectivity to the node, allowing new sessions to be created on it again.
Memory Guard is automatically enabled when you install Oracle Grid Infrastructure for an Oracle Real Application Clusters (Oracle RAC) or an Oracle RAC One Node database.
$ srvctl status qosmserver
QoS Management Server is enabled.
QoS Management Server is running on node node-ind01.
OCLUMON Command Reference
Use the oclumon command-line tool to query the Cluster Health Monitor repository and display node-specific metrics for a given time period.
$ oclumon version
Cluster Health Monitor (OS), Version 12.2.0.1.0 - Production Copyright 2007, 2016 Oracle. All rights reserved.
$ oclumon manage -get MASTER
Master = node-ind01
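The collected metrics themselves are retrieved with oclumon dumpnodeview; a hedged sketch (the node name and five-minute interval are chosen only for illustration):
$ oclumon dumpnodeview -n node-ind01 -last "00:05:00"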
MAX_STRING_SIZE
In pre-12c database releases, the VARCHAR2 and NVARCHAR2 data types were limited to 4000 bytes, while the RAW data type was limited to 2000 bytes. If you need to store more data in a VARCHAR2 column, you can do so in 12c after setting the parameter MAX_STRING_SIZE to EXTENDED. With MAX_STRING_SIZE set to EXTENDED, you can store up to 32KB (32767 bytes) in VARCHAR2, NVARCHAR2, and RAW columns.
Although it is an initialization parameter, there are a few restrictions and requirements around changing it. The default for MAX_STRING_SIZE is STANDARD in Oracle Database 12c, where the maximum sizes for VARCHAR2, NVARCHAR2, and RAW are the same as in previous Oracle releases. To change the value of this parameter to EXTENDED, you have to follow certain steps, and once you set it to EXTENDED, there is no going back to STANDARD. The feature was introduced to help migrate data from non-Oracle databases to Oracle, where the non-Oracle varchar columns can have a maximum size well over 4K.
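The switch itself, roughly per Oracle's documented steps for a non-CDB; a hedged sketch (for a CDB the script must be run in each container, and the table at the end is purely illustrative):
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP UPGRADE;
SQL> ALTER SYSTEM SET max_string_size=EXTENDED;
SQL> @?/rdbms/admin/utl32k.sql
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
SQL> CREATE TABLE t_notes (id NUMBER, notes VARCHAR2(32767));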
PGA_AGGREGATE_LIMIT
The PGA_AGGREGATE_LIMIT parameter, introduced in 12c, sets a hard limit on the PGA size. If we do not explicitly set a limit (provide a value), Oracle sets the default by picking the biggest value from this list:
2 GB
3 MB times the value of the PROCESSES parameter
2 times the value of the parameter PGA_AGGREGATE_TARGET
When the limit is reached, the database engine terminates calls or kills sessions with ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT. We can set the parameter to 0 to disable the feature, i.e., to have no limit. Also, if we plan to set PGA_AGGREGATE_LIMIT manually, it must be bigger than the default value derived by Oracle, or else we will get an error.
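A hedged sketch of checking and raising the limit (the 4G value is only an example):
SQL> SHOW PARAMETER pga_aggregate_limit
SQL> ALTER SYSTEM SET pga_aggregate_limit=4G SCOPE=BOTH;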
THREADED_EXECUTION
When an Oracle database is running on Linux/UNIX systems you see several processes, whereas on Windows "oracle.exe" has several threads. By setting the THREADED_EXECUTION parameter to TRUE, you can enable the multithreaded model on Linux/UNIX systems as well. The multithreaded Oracle model enables Oracle processes on UNIX and Linux to run as operating system threads in separate address spaces.
In the default process model, the SPID and STID columns of V$PROCESS have the same values, whereas in the multithreaded model, each SPID (process) has multiple STID (thread) values, and the EXECUTION_TYPE column in V$PROCESS shows THREAD.
When using threaded execution, you will not be able to log in to the database using operating system authentication such as "CONNECT / AS SYSDBA". Also, for connections to work, you must have DEDICATED_THROUGH_BROKER_<listenername>=ON set in the listener.ora file.
The database initialization parameter USE_DEDICATED_BROKER is set to TRUE automatically when THREADED_EXECUTION=TRUE. When this parameter is set to FALSE (the default in process mode), the listener spawns a dedicated server directly.
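A hedged sketch of enabling the threaded model (the listener name LISTENER is an assumption; adjust to your own):
Add to listener.ora and reload the listener:
DEDICATED_THROUGH_BROKER_LISTENER=ON
SQL> ALTER SYSTEM SET threaded_execution=TRUE SCOPE=SPFILE;
Restart the instance; "/ AS SYSDBA" now fails, so connect as SYS with a password, then verify:
SQL> SELECT spid, stid, execution_type FROM v$process;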
Alert.log: New timestamp format
If you compare the timestamps in Oracle 12.1 vs Oracle 12.2, you will notice a change in the timestamp format:
Oracle 12.1
Wed Feb 08 15:39:29 2018
Oracle 12.2
2018-05-29T14:09:17.064493+02:00
If you wish to revert to the old display format, use the init.ora/spfile parameter UNIFORM_LOG_TIMESTAMP_FORMAT. The default setting is TRUE. Once you switch it to FALSE, the timestamp in the alert.log reverts to the pre-Oracle-12.2 format dynamically.
ALTER SYSTEM SET uniform_log_timestamp_format=FALSE SCOPE=BOTH;
Flex Cluster – Hub Nodes, Leaf Nodes
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), Oracle Grid Infrastructure cluster configurations are Oracle Flex Cluster deployments.
A Flex Cluster has Hub Nodes and Leaf Nodes. A Hub Node is the cluster node we all know: it mounts the shared storage, it is attached to the cluster interconnect, it may or may not run database instances, and it runs an ASM instance. A Leaf Node does not mount the shared storage, and therefore it cannot run ASM or a database instance. It is, however, attached to the interconnect, is part of the cluster, and does run Grid Infrastructure.
Leaf Nodes are different from standard Oracle Grid Infrastructure nodes in that they do not require direct access to shared storage, but instead request data through Hub Nodes. Hub Nodes can run in an Oracle Flex Cluster configuration without any Leaf Nodes as cluster member nodes, but Leaf Nodes must be members of a cluster that includes at least one Hub Node.
Hub Nodes: They are connected to each other via the private network and have direct access to the shared storage, just like nodes in previous versions. These are the nodes that access the Oracle Cluster Registry (OCR) and Voting Disk (VD) directly.
Leaf Nodes: These nodes are lighter; they are not connected to each other and do not access the shared storage like the Hub Nodes do. Each Leaf Node communicates with the Hub Node it is attached to, and it is connected to the cluster via that Hub Node.
$ crsctl get cluster type
CRS-6539: The cluster type is 'flex'.
$ crsctl get cluster name
CRS-6724: Current cluster name is 'node-ind-clu'
$ crsctl get cluster class
CRS-41008: Cluster class is 'Standalone Cluster'
$ crsctl get node role status -all
Node 'node-ind01' active role is 'hub'
Node 'node-ind02' active role is 'hub'
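The configured role of a node can be changed with crsctl; a hedged sketch (run as root, and the stack on that node must be restarted for the change to take effect):
# crsctl set node role leaf
# crsctl stop crs
# crsctl start crs -wait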
Flex ASM
The Oracle Flex ASM can be implemented in two ways:
Pure 12c Flex ASM (Same Version)
Both Grid Infrastructure (GI) and Database running on Oracle 12c.
Pre Oracle 12c Mixed (Different Versions)
A normal ASM instance will run on each node in a Flex configuration to support pre-12c databases. The compatibility attributes of the ASM disk group are used to manage compatibility between and among database instances. The advantage of this approach is that if an Oracle 12c database instance loses connectivity with an ASM instance, the database connections fail over to another ASM instance on a different server. This failover is achieved by setting the cardinality to ALL.
Database instances connect to any ASM instance in the cluster. The number of ASM instances running in a given cluster is called the ASM cardinality; administrators specify the cardinality of ASM instances (the default is 3). Flex ASM eliminates the 1:1 mapping between a DB instance and an ASM instance.
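The cardinality can be inspected and changed with srvctl; a hedged sketch (the ALL setting restores the pre-12c behavior of one ASM instance per node):
$ srvctl config asm
$ srvctl modify asm -count ALL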
With Flex ASM in Oracle 12c there is a new type of network, called the ASM network. This network is used for communication between ASM and its clients and is accessible on all the nodes.
$ asmcmd
ASMCMD> showclustermode
ASM cluster : Flex mode enabled
ASMCMD> showclusterstate
Normal
$ crsctl status resource ora.asm -f | grep -i DEP
RELOCATE_BY_DEPENDENCY=0
START_DEPENDENCIES=hard(ora.ASMNET1LSNR_ASM.lsnr)
weak(ora.LISTENER.lsnr) pullup(ora.ASMNET1LSNR_ASM.lsnr)
dispersion:active(site:type:ora.asm.type)
STOP_DEPENDENCIES=hard(intermediate:ora.ASMNET1LSNR_ASM.lsnr)
Oracle ASM Configuration Assistant (ASMCA) can be used to enable Oracle Flex ASM after the installation/upgrade is performed.
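In silent mode the conversion can be scripted; a hedged sketch (the interface name, subnet, and listener port are hypothetical):
$ asmca -silent -convertToFlexASM -asmNetworks eth1/192.168.10.0 -asmListenerPort 1521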