Friday, January 24, 2014

Multi-Data Center Implementation in Oracle Access Manager

For obvious reasons, there is high demand for Multi-Data Center (MDC) topology, which is now supported in Oracle Access Manager (OAM) 11g.  This post discusses some of the features of MDC and provides detailed steps for cloning a secondary data center.  This post is based on the R2PS1 code base.  PS2 introduces some new features that I will cover below.  Here is the PS2 document library for reference.

Here is a conceptual topology for an MDC deployment.


[Figure mdc-pic1: conceptual MDC topology]


This should be fairly self-explanatory.  Notice the Global Load Balancers (GLBR); both the New York and London data centers must be front-ended with a GLBR for MDC support.  This allows a user request to be routed to a different data center when:
  • The data center goes down.
  • There is a load spike causing redistribution of traffic.
  • Certain applications are deployed in only one data center.
  • WebGates are configured to load balance within one data center but failover across data centers.


Deployment

There are two parts to deploying MDC. The first is 'cloning' the configuration from the master site to a secondary site using the Test-to-Production (T2P) process. The second is enabling the MDC configuration so that the partner sites are aware of each other.  This post will only cover the T2P procedure.  T2P is not new; however, many of our legacy OAM customers may not be familiar with it.  I will describe the commands I executed to clone a master site to a secondary site using T2P.

More details on T2P can be found in the Oracle Fusion Middleware guide here.

MDC supports both active-active and active-passive/stand-by scenarios.  The following prerequisites must be satisfied before deploying Multi-Data Centers:
  • All Data Center clusters must be front-ended by a single Load Balancer.
  • Clocks on the machines in which Access Manager and agents are deployed must be in sync. Non-MDC Access Manager clusters require the clocks of WebGate agents to be in sync with the Access Manager servers. This requirement applies to the MDC as well. If the clocks are out of sync, token validations will not be consistent, resulting in deviations from the expected behaviors regarding the token expiry interval, validity interval, timeouts and the like.
  • The identity stores in a Multi-Data Center topology must have the same name.
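Given the clock-sync prerequisite above, it can be worth scripting a quick skew check between data centers before deploying. A minimal sketch, assuming a tolerance of 5 seconds; the host name is an example, and in practice the remote epoch time would come from something like `ssh <host> date +%s`:

```shell
MAX_SKEW=5   # tolerated drift in seconds (example value)

# Report whether a remote host's clock is within MAX_SKEW of ours.
# Arguments: <host label> <remote epoch seconds>
check_skew() {
  host=$1
  remote_epoch=$2
  local_epoch=$(date +%s)
  skew=$(( remote_epoch - local_epoch ))
  abs=${skew#-}                       # strip the sign for comparison
  if [ "$abs" -le "$MAX_SKEW" ]; then
    echo "$host: OK (skew ${skew}s)"
  else
    echo "$host: WARNING skew ${skew}s exceeds ${MAX_SKEW}s"
  fi
}

# Demo with the local clock standing in for the remote one:
check_skew iam2.us.oracle.com "$(date +%s)"
```

Run this from each data center against the others; any WARNING should be resolved (e.g. via NTP) before continuing.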
High-level Steps:
  • The first Data Center is designated as Master and will be cloned (using T2P tools) for additional Data Centers.
  • All configuration and policy changes are propagated from the Master to the Clone using the WLST commands provided as part of the T2P Tooling.
  • Each Data Center is a separate WebLogic Domain and the install topology is the same.
Below are the steps I used to clone a master data center of two OAM servers in a cluster to a secondary data center.
For more details on the scripts I used, please check the documentation here.
Detailed steps:
The two steps below are only required for OAM version R2PS1.  Exporting/importing the schema is no longer required in PS2, which has a new feature called 'Automatic Policy Synchronization' (APS).  Click here to learn more.
  • Export the OPSS schema from the 'master' DB instance.  Set ORACLE_HOME to the database home directory and run the 'expdp' command from its bin directory.
export ORACLE_HOME=/u01/DB/product/11.2.0/dbhome_1
cd $ORACLE_HOME/bin
./expdp system/welcome1@db11g DIRECTORY=DATA_PUMP_DIR SCHEMAS=STMTEST_OPSS DUMPFILE=export_TEST.dmp PARALLEL=2 LOGFILE=export.log

output:
 Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u01/db11g/admin/db11g/dpdump/export_TEST.dmp

  • Import the OPSS schema to the secondary/cloned DB.  Make sure that the schema on the secondary/cloned DB instance is loaded via RCU. Load both the OAM and OPSS schema on the secondary DB instance and note down the schema names.
./impdp system/welcome1@orcl DIRECTORY=DATA_PUMP_DIR DUMPFILE=export_TEST.dmp PARALLEL=2 LOGFILE=import.log remap_schema=STMTEST_OPSS:STMPROD_OPSS remap_tablespace=STMTEST_IAS_OPSS:STMPROD_IAS_OPSS TABLE_EXISTS_ACTION=REPLACE
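If the secondary DB has not yet been seeded, RCU can be driven in silent mode. The sketch below only composes and prints the invocation rather than executing it; the connect string, prefix, and component names are placeholders, so check `rcu -help` for the exact component IDs in your release before running it for real:

```shell
# Illustrative only: compose the RCU silent-mode command that would load
# the OAM and OPSS schemas on the secondary DB. All values are placeholders.
RCU_CMD="./rcu -silent -createRepository \
 -databaseType ORACLE -connectString dbhost2:1521:orcl \
 -dbUser sys -dbRole SYSDBA \
 -schemaPrefix STMPROD -component OAM -component OPSS"
echo "$RCU_CMD"
```

In silent mode RCU prompts for the passwords on standard input, so they are typically supplied from a file rather than embedded in the command.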

  • On the 'master' machine you need to copy the binaries. Server state is immaterial. Make sure to create the oam_cln_log directory first.  I also recommend creating a separate directory to store MDC-related artifacts; for example, /u01/MDC_FILES.
cd /u01/IAM1/Middleware/oracle_common/bin
copyBinary.sh -javaHome /home/oracle/java/jdk1.7.0_10 -archiveLoc /u01/MDC_FILES/oamt2pbin.jar -sourceMWHomeLoc /u01/IAM1/Middleware -idw true -ipl /u01/IAM1/Middleware/oracle_common/oraInst.loc -silent true -ldl /u01/MDC_FILES/oam_cln_log

  • On the 'master' machine you need to copy the configuration. Both the administration server and all managed servers need to be up and running. The WebLogic server must also be in production mode.
copyConfig.sh -javaHome /home/oracle/java/jdk1.7.0_10 -archiveLoc /u01/MDC_FILES/oamt2pConfig.jar -sourceDomainLoc /u01/IAM1/Middleware/user_projects/domains/IAMDomain -sourceMWHomeLoc /u01/IAM1/Middleware -domainHostName iam1.us.oracle.com -domainPortNum 7001 -domainAdminUserName weblogic -domainAdminPassword /u01/MDC_FILES/t2p_domain_pass.txt -silent true -ldl /u01/MDC_FILES/oam_cln_log_config
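Note the -domainAdminPassword argument: copyConfig.sh reads the password from a file rather than taking it on the command line. One way to create that file, with the path and password being this walkthrough's example values; keep it readable only by the install user:

```shell
# Create the password file consumed by copyConfig.sh / pasteConfig.sh.
MDC_DIR=/tmp/MDC_FILES          # /u01/MDC_FILES in the real environment
mkdir -p "$MDC_DIR"
umask 077                       # ensure the file is created private
printf '%s\n' 'welcome1' > "$MDC_DIR/t2p_domain_pass.txt"
chmod 600 "$MDC_DIR/t2p_domain_pass.txt"
ls -l "$MDC_DIR/t2p_domain_pass.txt"
```

Delete or lock down this file once the T2P run is complete; it holds the WebLogic admin password in clear text.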

The following commands are to be executed on the 'clone' machine.
  • Copy the following files from the master environment: oamt2pbin.jar, oamt2pConfig.jar, pasteBinary.sh, oraInst.loc and cloningclient.jar.  The oamt2pbin and oamt2pConfig jar files should have been created with the copy commands above.  The cloningclient.jar, pasteBinary.sh and oraInst.loc can be found within the /oracle_common directory.
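Before proceeding, it is worth confirming that all five artifacts actually made it to the clone machine. A small sketch; the staging directory is this example's path:

```shell
# Run on the clone machine after the copy: confirm all five T2P artifacts
# are present in the staging directory.
check_stage() {
  dir=$1; missing=0
  for f in oamt2pbin.jar oamt2pConfig.jar pasteBinary.sh oraInst.loc cloningclient.jar; do
    [ -f "$dir/$f" ] || { echo "missing: $dir/$f"; missing=1; }
  done
  return $missing
}

check_stage /u01/MDC_FILES && echo "all artifacts staged" \
  || echo "stage the missing files before continuing"
```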
  • The pasteBinary.sh script copies the binary data (oamt2pbin.jar) to the new server.  No Oracle software, with the exception of Java, should be installed on the new machine.  In this example, the placeholder directory /u01/IAM1 and the log directory /u01/MDC_FILES/oam_cln_log need to exist before running the command below.
./pasteBinary.sh -javaHome /home/oracle/java/jdk1.7.0_10 -al /u01/MDC_FILES/oamt2pbin.jar -tmw /u01/IAM1/Middleware -silent true -idw true -esp false -ipl /u01/MDC_FILES/oraInst.loc -ldl /u01/MDC_FILES/oam_cln_log
  • Next we need to extract a move plan file.  This file allows you to modify some of the details of the new environment.  The script is called 'extractMovePlan.sh' and is located under /oracle_common/bin.
./extractMovePlan.sh -javaHome /home/oracle/java/jdk1.7.0_10 -al /u01/MDC_FILES/oamt2pConfig.jar -planDirLoc /u01/MDC_FILES/moveplan/

Once the 'moveplan.xml' was created, I changed the following:
  • All host name endpoints.  For example, my master host name was iam1.us.oracle.com; I changed this to iam2.us.oracle.com.  If you have multiple components on the same machine, make sure you modify all properties that apply to your deployment.
                     <configProperty>
                        <name>Listen Address</name>
                        <value>iam2.us.oracle.com</value>
                        <itemMetadata>
                            <dataType>STRING</dataType>
                            <scope>READ_WRITE</scope>
                        </itemMetadata>
                    </configProperty>

  • WLS machine name and Node Manager host name.
            <configGroup>
                <type>MACHINE_CONFIG</type>
                <configProperty id="Machine1">
                    <configProperty>
                        <name>Machine Name</name>
                        <value>IAM2</value>
                        <itemMetadata>
                            <dataType>STRING</dataType>
                            <scope>READ_WRITE</scope>
                        </itemMetadata>
                    </configProperty>
                    <configProperty>
                        <name>Node Manager Listen Address</name>
                        <value>iam2.us.oracle.com</value>
                        <itemMetadata>
                            <dataType>STRING</dataType>
                            <scope>READ_WRITE</scope>
                        </itemMetadata>
                    </configProperty>
                    <configProperty>
                        <name>Node Manager Listen Port</name>
                        <value>5556</value>
                        <itemMetadata>
                            <dataType>INTEGER</dataType>
                            <scope>READ_WRITE</scope>
                        </itemMetadata>
                    </configProperty>
                </configProperty>
            </configGroup>

  • Schema owners.  Make sure you change both the OPSS and OAM schema configuration properties.
                    <configProperty>
                        <name>User</name>
                        <value>MDC2_OPSS</value>
                        <itemMetadata>
                            <dataType>STRING</dataType>
                            <scope>READ_WRITE</scope>
                        </itemMetadata>
                    </configProperty>

                    <configProperty>
                        <name>User</name>
                        <value>MDC2_OAM</value>
                        <itemMetadata>
                            <dataType>STRING</dataType>
                            <scope>READ_WRITE</scope>
                        </itemMetadata>
                    </configProperty>
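If you clone repeatedly, the host name edits to moveplan.xml can be scripted instead of done by hand. A minimal sketch using sed; the host names and path are this example's, and since a blind global replace may touch properties you intended to keep, review the result against the backup afterwards:

```shell
MASTER_HOST=iam1.us.oracle.com
CLONE_HOST=iam2.us.oracle.com
PLAN=/tmp/moveplan.xml   # /u01/MDC_FILES/moveplan/moveplan.xml in the real env

# Stand-in for the extracted plan, reduced to one property for illustration:
cat > "$PLAN" <<EOF
<configProperty>
  <name>Listen Address</name>
  <value>${MASTER_HOST}</value>
</configProperty>
EOF

cp "$PLAN" "$PLAN.orig"                            # keep a backup
sed -i "s/${MASTER_HOST}/${CLONE_HOST}/g" "$PLAN"  # master -> clone
grep '<value>' "$PLAN"
```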

  • Now we paste the configuration on the target/clone machine using the 'moveplan.xml' we just modified.
./pasteConfig.sh -javaHome /home/oracle/java/jdk1.7.0_10 -archiveLoc /u01/MDC_FILES/oamt2pConfig.jar -targetMWHomeLoc /u01/IAM1/Middleware -targetDomainLoc /u01/IAM1/Middleware/user_projects/domains/IAMDomain -movePlanLoc /u01/MDC_FILES/moveplan/moveplan.xml -domainAdminPassword /u01/MDC_FILES/t2p_domain_pass.txt -ldl /u01/MDC_FILES/oam_cln_log_paste -silent true

You should now be able to start the Administration/OAM servers on the secondary/cloned machine.
