
Publish a REST service from PL/SQL to handle HTTP POST requests – using the embedded PL/SQL gateway


Oracle Database can act as an HTTP server – using the Embedded PL/SQL Gateway (the 10g successor of the MOD_PLSQL gateway). With just a few statements, we can have the Oracle Database become a listener to HTTP requests (GET or POST). When requests are received at the configured host, port and URL, the request is passed to a PL/SQL procedure that handles it and prepares a response.

In this article, we will expose a REST service at URL http://localhost:8080/api/movieevents. This service processes an HTTP POST request that in this case contains a JSON payload. The payload is passed to the PL/SQL procedure to do with as it sees fit.

The implementation takes place in two steps. First, the DBA must make some preparations so that a particular database schema can handle HTTP requests received on a certain URL; this includes opening up a certain host and port. Second, the PL/SQL procedure that actually handles the requests is created in that schema.

First, you may want to check the current HTTP port:

select dbms_xdb.gethttpport
from   dual

and if you do not like it, set another one:

EXECUTE dbms_xdb.SETHTTPPORT(8080);

The following statements create the Access Control List that allows database schema WC to be accessed with HTTP requests to host 127.0.0.1 (aka localhost) and ports between 7000 and 9200:

begin
  dbms_network_acl_admin.create_acl (
    acl             => 'utlpkg.xml',
    description     => 'Normal Access',
    principal       => 'CONNECT',
    is_grant        => TRUE,
    privilege       => 'connect',
    start_date      => null,
    end_date        => null
  );
end;



begin
  dbms_network_acl_admin.add_privilege (
    acl        => 'utlpkg.xml',
    principal  => 'WC',
    is_grant   => TRUE,
    privilege  => 'connect',
    start_date => null,
    end_date   => null);
  dbms_network_acl_admin.assign_acl (
    acl        => 'utlpkg.xml',
    host       => '127.0.0.1',
    lower_port => 7000,
    upper_port => 9200);
end;

Next, the DAD (Database Access Descriptor) is created, linking the URL path segment /api/ to the WC database schema. This means that any HTTP request received at http://localhost:8080/api/XXX is passed to a PL/SQL procedure called XXX:

BEGIN
  DBMS_EPG.create_dad
  ( dad_name => 'restapi'
  , path     => '/api/*'
  );
  DBMS_EPG.AUTHORIZE_DAD('restapi','WC');
end;

The next line instructs the Embedded PL/SQL Gateway to return a readable error page whenever a request is not processed correctly:

exec dbms_epg.set_dad_attribute('restapi', 'error-style', 'DebugStyle');

This line associates the database user WC with the restapi DAD:

EXEC DBMS_EPG.SET_DAD_ATTRIBUTE('restapi', 'database-username', 'WC');

The final aspect of the preparation involves allowing anonymous access, which means that no username and password are required for HTTP calls handled by the Embedded PL/SQL Gateway. As per Tim Hall’s instructions:

To enable anonymous access to the XML DB repository, the following code creates the “<allow-repository-anonymous-access>” element if it is missing, or updates it if it is already present in the xdbconfig.xml file.

SET SERVEROUTPUT ON
DECLARE
  l_configxml XMLTYPE;
  l_value     VARCHAR2(5) := 'true'; -- (true/false)
BEGIN
  l_configxml := DBMS_XDB.cfg_get();

  IF l_configxml.existsNode('/xdbconfig/sysconfig/protocolconfig/httpconfig/allow-repository-anonymous-access',
                            'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"') = 0 THEN
    -- Add missing element.
    SELECT insertChildXML(l_configxml,
                          '/xdbconfig/sysconfig/protocolconfig/httpconfig',
                          'allow-repository-anonymous-access',
                          XMLType('<allow-repository-anonymous-access xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd">' ||
                                  l_value ||
                                  '</allow-repository-anonymous-access>'),
                          'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"')
    INTO   l_configxml
    FROM   dual;

    DBMS_OUTPUT.put_line('Element inserted.');
  ELSE
    -- Update existing element.
    SELECT updateXML(DBMS_XDB.cfg_get(),
                     '/xdbconfig/sysconfig/protocolconfig/httpconfig/allow-repository-anonymous-access/text()',
                     l_value,
                     'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"')
    INTO   l_configxml
    FROM   dual;

    DBMS_OUTPUT.put_line('Element updated.');
  END IF;

  DBMS_XDB.cfg_update(l_configxml);
  DBMS_XDB.cfg_refresh;
END;
/

The database account anonymous also has to be unlocked to truly enable anonymous access:

ALTER USER anonymous ACCOUNT UNLOCK;

 

This completes the preparations. We have now set up a DAD that is associated with the /api/* path in HTTP requests sent to http://localhost:8080/api/*. This DAD hands requests to the WC database schema to be handled. Requests do not have to include a username and password.

Now we have to connect to the WC database schema in order to create the PL/SQL procedure that will handle such requests.

create or replace procedure movieevents
( p_json_payload in varchar2 default '{}'
)
is
begin
  htp.p('call received p_json_payload='||p_json_payload);
  htp.p('REQUEST_METHOD='||owa_util.get_cgi_env(param_name => 'REQUEST_METHOD'));
end movieevents;

With the DAD defined, the port range opened up and this procedure created, the setup is complete: HTTP POST requests with a body containing any payload sent to http://localhost:8080/api/movieevents are received and processed. The call results in nothing but a simple plain-text response that describes what was received.
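Once the procedure is in place, the service can be invoked from any HTTP client. Below is a minimal sketch using Python with the requests library (my own choice for illustration; curl or any other client works just as well). It assumes the gateway maps form parameters to the procedure's parameters by name, so the JSON document is sent as the value of a form field called p_json_payload:

import requests

# the JSON document we want the procedure to receive
payload = '{"event": "start", "movie": "The Matrix", "time": "20:30"}'

# POST it as a form parameter named after the procedure's parameter
response = requests.post(
    "http://localhost:8080/api/movieevents",
    data={"p_json_payload": payload},
)

print(response.status_code)
print(response.text)  # should echo the payload and REQUEST_METHOD=POST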

This opens up a bridge from any client capable of speaking HTTP to the database: non-transactional, across firewalls and without additional drivers.

Resources

Some resources:

 http://ora-00001.blogspot.com/2009/07/creating-rest-web-service-with-plsql.html

And especially Tim Hall:

http://www.oracle-base.com/articles/10g/dbms_epg_10gR2.php and  http://oracle-base.com/articles/misc/xml-over-http.php

The Oracle documentation: http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28424/adfns_web.htm

On debugging and errorpage:  http://daust.blogspot.com/2008/04/troubleshooting-404-not-found-error-on.html



Use DB Vault to protect password strength policy


Suppose your organization wants to enforce a security policy on database password strength. The DBAs have implemented a password strength verification function in PL/SQL, such as the Oracle-supplied ora12c_strong_verify_function, in the DEFAULT profile of the database. At first there seems to be no way to get around it:

Database account u4 is created:

 

create-user-u4

 

U4 logs in and tries to change the password to something simple:

 

u4-cannot-simplify-password

 

That password verification function got in the way. U4 searches for a way around this block and stumbles upon the blog from Steve Karam titled Password Verification Security Loophole, in which Steve demonstrates that it is possible to enter a weak password when creating a user or altering a password, even when a database password verify PL/SQL function is enforced. The way to accomplish this is to use the special IDENTIFIED BY VALUES clause when running the ALTER USER command:

 

2015-07-19 20_22_46-Untitled - Notepad

 

The reason for this behaviour of the Oracle Database is that the IDENTIFIED BY VALUES clause is followed by a hash-encoded password string, which cannot (easily) be decoded to the original plaintext password. The password strength rules only apply to the original plaintext password value. The only way to crack a hash would be to feed the hash algorithm candidate passwords and see if the hashed value matches the encoded password string that is known. In the case of the ALTER USER command that would be infeasible, because where would the Oracle Database have to stop trying? The number of candidate passwords is limitless.

Until Oracle decides to disable this feature that allows a pre-cooked-at-home encoded password string to be used, there seems to be no way to stop users from using the IDENTIFIED BY VALUES clause when they have the privilege to use the ALTER USER command. Or is there?

In fact, there is a way. It is possible with one of my favorite EE options, Database Vault (a separately licensed option for Oracle Database Enterprise Edition), because it allows us to create our own rules on commands such as ALTER USER, on top of the system privileges normally required to use the command. With the Database Vault rules enabled, we see the following when someone tries to use the IDENTIFIED BY VALUES clause:

u4-cannot-simplify-password-using-identified-by-values-clause

As you can see, the IDENTIFIED BY VALUES clause can no longer be used.
The Database Vault setup script I used is given below and should be run by a database account with at least the DV_ADMIN role enabled. Note that individual DV rules are first combined into a DV rule set and then this rule set is used as the command rule for ALTER USER, CREATE USER and CHANGE PASSWORD. Rules in a rule set are evaluated using either ALL TRUE or ANY TRUE logic. In my case I needed a mix: I created one DV rule with two checks combined using ANY TRUE, and a second DV rule to check the SQL string. These two DV rules were then put in the DV rule set using ALL TRUE evaluation logic. The ‘Is user allowed or modifying own password’ rule is in fact a copy of an Oracle-supplied rule. It checks whether the user has the DV_ACCTMGR role OR whether the user is trying to change his/her own password.

-- CREATE DV RULES

BEGIN
  DVSYS.DBMS_MACADM.CREATE_RULE (
    rule_name   => 'Contains no identified by values clause',
    rule_expr   => 'UPPER(DVSYS.DV_SQL_TEXT) not like ''%IDENTIFIED BY VALUES%''');

  DVSYS.DBMS_MACADM.CREATE_RULE (
    rule_name   => 'Is user allowed or modifying own password',
    rule_expr   => 'DVSYS.DBMS_MACADM.IS_ALTER_USER_ALLOW_VARCHAR(''"''||dvsys.dv_login_user||''"'') = ''Y'' OR DVSYS.dv_login_user = dvsys.dv_dict_obj_name');
END;
/

-- CREATE DV RULESET

BEGIN
  DVSYS.DBMS_MACADM.CREATE_RULE_SET (
    rule_set_name     => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    description       => 'rule set for (Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    enabled           => 'Y',
    eval_options      => '1',
    audit_options     => '3',
    fail_options      => '1',
    fail_message      => 'IDENTIFIED BY VALUES clause not allowed',
    fail_code         => '-20600',
    handler_options   => '0',
    handler           => NULL);
END;
/

-- ADD RULES TO RULESET

BEGIN
  DVSYS.DBMS_MACADM.ADD_RULE_TO_RULE_SET (
    rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    rule_name       => 'Contains no identified by values clause',
    rule_order      => '1',
    enabled         => 'Y');
  DVSYS.DBMS_MACADM.ADD_RULE_TO_RULE_SET (
    rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    rule_name       => 'Is user allowed or modifying own password',
    rule_order      => '1',
    enabled         => 'Y');
END;
/

-- UPDATE COMMAND RULES

BEGIN
  DVSYS.DBMS_MACADM.UPDATE_COMMAND_RULE (
    command         => 'CREATE USER',
    rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    object_owner    => DBMS_ASSERT.ENQUOTE_NAME ('%', FALSE),
    object_name     => '%',
    enabled         => 'Y');
END;
/

BEGIN
  DVSYS.DBMS_MACADM.UPDATE_COMMAND_RULE (
    command         => 'ALTER USER',
    rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    object_owner    => DBMS_ASSERT.ENQUOTE_NAME ('%', FALSE),
    object_name     => '%',
    enabled         => 'Y');
END;
/

BEGIN
  DVSYS.DBMS_MACADM.UPDATE_COMMAND_RULE (
    command         => 'CHANGE PASSWORD',
    rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
    object_owner    => DBMS_ASSERT.ENQUOTE_NAME ('%', FALSE),
    object_name     => '%',
    enabled         => 'Y');
END;
/

 

 

NOTES:

  • The PASSWORD command in SQL*Plus also seems to use the IDENTIFIED BY VALUES clause, so this DV setup disables that command too.

u4-cannot-simplify-password-using-password-command

  • To find out the hash-encoded string to be used in the IDENTIFIED BY VALUES clause, one can simply create a user in a homegrown database (preferably of the same version as the victim database) and afterwards retrieve the SPARE4 column value from the SYS.USER$ table for that user. Note that the username itself is used in the Oracle algorithm that calculates the hash value, so the hash only works for a user with the same name.

How to create an if-then-else expression (aka ternary operator) in an XPath 1.0 expression?


There are situations where I have to create an XPath expression that performs something like if-then-else logic (similar to a CASE or DECODE expression in SQL or a ternary operator in Java or JavaScript). Unfortunately, XPath 1.0 – a version still widely found – does not support the XPath 2.0 if-then-else logic. So something else is needed.

I encountered a great trick in a forum post; this post also referred to Becker’s Method. In short, the trick uses the fact that the numerical evaluation of (Boolean) true = 1 and of false = 0. It also makes use of the fact that substring( somestring, 1, 0) returns an empty string.

The abstract XPath expression used for

if C1 then R1 else R2

becomes

concat( substring( R1, 1, number(C1) * string-length(R1) ), substring( R2, 1, number(not(C1)) * string-length(R2) ) )

When R1 and R2 are not of type string, some conversions to and from string are required.

An example of using this trick:

In a Mediator component I want to assign the value 'LONG' when the input string is longer than 6 characters. When the string length is 6 or less, I want to assign the value 'NOTLONG'.

With the ternary expression, this would be something like:

result = (input.length() > 6 ? 'LONG' : 'NOTLONG')

The corresponding XPath expression is this:

concat(
  substring( 'LONG', 1, ( number( string-length($in.payload/client:process/client:input) > 6 ) * 4 ))
, substring( 'NOTLONG', 1, ( number( string-length($in.payload/client:process/client:input) <= 6 ) * 7 ))
)

image
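To convince yourself that the expression behaves like a ternary operator, you can evaluate a simplified version of it with any XPath 1.0 engine. The sketch below uses Python with lxml (my own choice for illustration; the Mediator example above uses the SOA Suite XPath engine and namespace-qualified elements instead):

from lxml import etree

doc = etree.fromstring('<process><input>Hello world!</input></process>')

expr = ("concat("
        " substring('LONG',    1, number(string-length(/process/input) > 6)  * 4),"
        " substring('NOTLONG', 1, number(string-length(/process/input) <= 6) * 7))")

print(doc.xpath(expr))  # prints 'LONG' because the input is 12 characters long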


How to use WLST as a Jython 2.7 module


WebLogic Scripting Tool (WLST) in WebLogic Server 12.1.3 uses Jython version 2.2.1 (based on Python 2.2.1). This can be an important limitation when using WLST. Many modules are not available for 2.2.1 or are difficult to install. See here for an example. WLST however can be used as a module in Jython 2.7. This allows you to use all kinds of nice Jython 2.7 goodness while still having all the great WLST functionality available.

To just name some nice Jython 2.7 features:

  • pip and easy_install can be used to easily add new modules
  • useful new APIs are available, such as xml.etree.ElementTree for XML processing, the multiprocessing module to use multiple processes and the argparse module to make parsing of script arguments easy.

In this article I’ll describe how you can use WLST as a Jython 2.7 module in order to allow you to combine the best of both worlds in your scripts.

Ready Jython

First you need to install Jython. You can obtain Jython from: http://www.jython.org/.

Obtain the classpath

In order for WLST as a module to function correctly, it needs its dependencies. Those dependencies are generated by several scripts such as:

  • <WLS_HOME>/wlserver/server/bin/setWLSEnv.sh
  • <WLS_HOME>/oracle_common/common/bin/wlst.sh
  • <WLS_HOME>/osb/tools/configjar/wlst.sh
  • <WLS_HOME>/soa/common/bin/wlst.sh
  • <WLS_HOME>/wlserver/common/bin/wlst.sh

It can be a challenge to abstract the logic used to obtain a complete classpath from those scripts. Why make it difficult for yourself? Just ask WLST:

<WLS_HOME>/soa/common/bin/wlst.sh

This will tell you the classpath for WLST (which in this case includes the SOA Suite WLST commands). Even though this is usually a long list, it is not enough! You also need wlfullclient.jar (see here for how to create it). Apparently there are also some JARs which are used but are not in the default WLST classpath, such as several <WLS_HOME>/oracle_common/modules/com.oracle.cie.* files. Just add <WLS_HOME>/oracle_common/modules/* to the classpath to fix issues like:

java.lang.RuntimeException: java.lang.RuntimeException: Could not find the OffLine WLST class
weblogic.management.scripting.utils.WLSTUtil.getOfflineWLSTScriptPathInternal

You can remove overlapping classpath entries. Since <WLS_HOME>/oracle_common/modules/* is in the classpath, you don’t need to mention individual modules anymore.

Obtain the module path

Jython needs a module path in order to find the modules used by WLST which are hidden in several JAR files. Again, simply ask WLST for it. Start

<WLS_HOME>/soa/common/bin/wlst.sh

And issue the command:

print sys.path

It will give you something like

['.', '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/modules/features/weblogic.server.merged.jar/Lib', '__classpath__', '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/server/lib/weblogic.jar', '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/common/wlst/modules/jython-modules.jar/Lib', '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/common/wlst', '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/common/wlst/lib', '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/common/wlst/modules', '/home/maarten/Oracle/Middleware1213/Oracle_Home/oracle_common/common/wlst', '/home/maarten/Oracle/Middleware1213/Oracle_Home/oracle_common/common/wlst/lib', '/home/maarten/Oracle/Middleware1213/Oracle_Home/oracle_common/common/wlst/modules', '/home/maarten/Oracle/Middleware1213/Oracle_Home/oracle_common/common/script_handlers', '/home/maarten/Oracle/Middleware1213/Oracle_Home/soa/common/script_handlers', '/home/maarten/Oracle/Middleware1213/Oracle_Home/soa/common/wlst', '/home/maarten/Oracle/Middleware1213/Oracle_Home/soa/common/wlst/lib', '/home/maarten/Oracle/Middleware1213/Oracle_Home/soa/common/wlst/modules']

Interesting to see where Oracle has hidden all those modules. You can add them to the Jython module path by setting the PYTHONPATH variable.

Create a Jython start script

The easiest way to make sure your classpath and Python module path are set prior to executing a script is to create a Jython start script (similar to wlst.sh). My start script looked like:

startjython.sh

export WL_HOME=/home/maarten/Oracle/Middleware1213/Oracle_Home

export CLASSPATH=$WL_HOME/oracle_common/soa/modules/oracle.soa.mgmt_11.1.1/soa-infra-mgmt.jar:$WL_HOME/oracle_common/soa/modules/commons-cli-1.1.jar:$WL_HOME/soa/soa/modules/oracle.soa.mgmt_11.1.1/soa-infra-mgmt.jar:$WL_HOME/soa/soa/modules/commons-cli-1.1.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/fabric-runtime.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/soa-infra-tools.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/tracking-core.jar:$WL_HOME/soa/soa/modules/oracle.soa.workflow_11.1.1/bpm-services.jar:$WL_HOME/soa/soa/modules/chemistry-opencmis-client/chemistry-opencmis-client.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/testfwk-xbeans.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/oracle-soa-client-api.jar:$WL_HOME/soa/soa/modules/oracle.bpm.alm.script-legacy.jar:$WL_HOME/soa/soa/modules/oracle.bpm.bac.script.jar:$WL_HOME/oracle_common/modules/com.oracle.webservices.fmw.wsclient-rt-impl_12.1.3.jar:$WL_HOME/oracle_common/modules/com.oracle.classloader.pcl_12.1.3.jar:$WL_HOME/oracle_common/modules/org.apache.commons.logging_1.0.4.jar:$WL_HOME/oracle_common/modules/org.apache.commons.beanutils_1.6.jar:$WL_HOME/oracle_common/modules/oracle.ucp_12.1.0.jar:$WL_HOME/soa/soa/modules/oracle.rules_11.1.1/rulesdk2.jar:$WL_HOME/soa/soa/modules/oracle.rules_11.1.1/rl.jar:$WL_HOME/oracle_common/modules/oracle.adf.model_12.1.3/adfm.jar:$WL_HOME/oracle_common/modules/oracle.jdbc_12.1.0/ojdbc6dms.jar:$WL_HOME/oracle_common/modules/oracle.xdk_12.1.3/xmlparserv2.jar:$WL_HOME/oracle_common/modules/*:$WL_HOME/jdeveloper/wlserver/lib/wlfullclient.jar:$WL_HOME/oracle_common/soa/modules/oracle.soa.mgmt_11.1.1/soa-infra-mgmt.jar:$WL_HOME/oracle_common/soa/modules/commons-cli-1.1.jar:$WL_HOME/soa/soa/modules/oracle.soa.mgmt_11.1.1/soa-infra-mgmt.jar:$WL_HOME/soa/soa/modules/commons-cli-1.1.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/fabric-runtime.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/soa-infra-tools.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/tracking-core.jar:$WL_HOME/soa/soa/modules/oracle.soa.workflow_11.1.1/bpm-services.jar:$WL_HOME/soa/soa/modules/chemistry-opencmis-client/chemistry-opencmis-client.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/testfwk-xbeans.jar:$WL_HOME/soa/soa/modules/oracle.soa.fabric_11.1.1/oracle-soa-client-api.jar:$WL_HOME/soa/soa/modules/oracle.bpm.alm.script-legacy.jar:$WL_HOME/soa/soa/modules/oracle.bpm.bac.script.jar:$WL_HOME/oracle_common/modules/com.oracle.webservices.fmw.wsclient-rt-impl_12.1.3.jar:$WL_HOME/oracle_common/modules/com.oracle.classloader.pcl_12.1.3.jar:$WL_HOME/oracle_common/modules/org.apache.commons.logging_1.0.4.jar:$WL_HOME/oracle_common/modules/org.apache.commons.beanutils_1.6.jar:$WL_HOME/oracle_common/modules/oracle.ucp_12.1.0.jar:$WL_HOME/soa/soa/modules/oracle.rules_11.1.1/rulesdk2.jar:$WL_HOME/soa/soa/modules/oracle.rules_11.1.1/rl.jar:$WL_HOME/oracle_common/modules/oracle.adf.model_12.1.3/adfm.jar:$WL_HOME/oracle_common/modules/oracle.jdbc_12.1.0/ojdbc6dms.jar:$WL_HOME/oracle_common/modules/oracle.xdk_12.1.3/xmlparserv2.jar

export PYTHONPATH=.:$WL_HOME/wlserver/modules/features/weblogic.server.merged.jar/Lib:$WL_HOME/wlserver/server/lib/weblogic.jar:$WL_HOME/wlserver/common/wlst/modules/jython-modules.jar/Lib:$WL_HOME/wlserver/common/wlst:$WL_HOME/wlserver/common/wlst/lib:$WL_HOME/wlserver/common/wlst/modules:$WL_HOME/oracle_common/common/wlst:$WL_HOME/oracle_common/common/wlst/lib:$WL_HOME/oracle_common/common/wlst/modules:$WL_HOME/oracle_common/common/script_handlers:$WL_HOME/soa/common/script_handlers:$WL_HOME/soa/common/wlst:$WL_HOME/soa/common/wlst/lib:$WL_HOME/soa/common/wlst/modules

/home/maarten/jython2.7.0/bin/jython "$@"
exit $?

As you can see, the PYTHONPATH is created with some search-and-replace actions on the output of sys.path printed by WLST: I removed the brackets and quotes, replaced the commas with colons and removed the extra spaces. I also replaced my WL_HOME with a variable, just to make the script look nicer and more reusable (a small sketch that automates this is shown below). For a Windows script, the search-and-replace actions are slightly different, such as using ; as the path separator and set instead of export.
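For example, the following small Python/Jython snippet turns the list printed for sys.path into a path string suitable for the PYTHONPATH variable (a sketch; paste your own sys.path output into the paths variable, shortened here):

import os

# the list that WLST prints for sys.path (shortened; replace with your own output)
paths = ['.',
         '/home/maarten/Oracle/Middleware1213/Oracle_Home/wlserver/common/wlst/modules/jython-modules.jar/Lib',
         '/home/maarten/Oracle/Middleware1213/Oracle_Home/oracle_common/common/wlst/modules']

# os.pathsep is ':' on Linux and ';' on Windows
print(os.pathsep.join(paths))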

You can use the start script in the same way as the wlst start script. Just keep in mind that using WLST as a module requires some minor changes to WLST scripts. See below.

Ready the WLST module

In order to use WLST as a module in Jython 2.7 you need to generate a wl.py file. This is described here. Actually, starting wlst.sh and executing writeIniFile("wl.py") is enough.

When using the module though, the following exception is raised:

Traceback (most recent call last):
  File "sample.py", line 8, in <module>
    import wl
  File "/home/maarten/tmp/wl.py", line 13, in <module>
    origPrompt = sys.ps1
AttributeError: 'reflected field public org.python.core.PyObject o' object has no attribute 'ps1'

WLST apparently has some shell-specific prompt handling code. It is easy to get rid of this exception, though, by replacing the following line in wl.py

origPrompt = sys.ps1

With

origPrompt = ">>>"

This origPrompt looks pretty much like my default prompt and I didn’t encounter any errors after setting it like this.

Seeing it work

My directory contains the following file: wl.py, generated as explained above with origPrompt replaced.

Next my listserver.py script:

import wl

wl.connect("weblogic","Welcome01", "t3://localhost:7101")
mbServers= wl.getMBean("Servers")
servers= mbServers.getServers()
print( "Array of servers: " )
print( servers )
for server in servers :
    print( "Server Name: " + server.getName() )
    print( "Done." )

Because WLST is imported as the module wl, you need to call wl.connect instead of connect, and similarly for other calls from the wl module. Otherwise you will get exceptions like:

Traceback (most recent call last):
File "listserver.py", line 9, in <module>
connect("weblogic","Welcome01", "t3://localhost:7101")
NameError: name 'connect' is not defined

The output when using my startjython.sh script as explained above:

startjython.sh listserver.py

Connecting to t3://localhost:7101 with userid weblogic ...
Successfully connected to Admin Server "DefaultServer" that belongs to domain "DefaultDomain".

Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.

Array of servers:
array(weblogic.management.configuration.ServerMBean, [[MBeanServerInvocationHandler]com.bea:Name=DefaultServer,Type=Server])
Server Name: DefaultServer

Done.

Installing the logging module becomes

jython2.7.0/bin/pip install logging
Downloading/unpacking logging
Downloading logging-0.4.9.6.tar.gz (96kB): 96kB downloaded
Running setup.py (path:/tmp/pip_build_maarten/logging/setup.py) egg_info for package logging

Installing collected packages: logging
Running setup.py install for logging

Successfully installed logging
Cleaning up...

And of course using the logger also works.

import logging
logging.basicConfig()
log = logging.getLogger("MyFirstLogger")
log.setLevel(logging.DEBUG)
log.info("That does work =:-)")

Output:

INFO:MyFirstLogger:That does work =:-)


Reflections after JavaOne 2015 – the platform (SE, ME, EE) and the community (me, you and us)


In terms of actual news, this year’s JavaOne conference did not have very much to offer. It was a party, celebrating 20 years of Java, and at the same time a confirmation of the strength of the Java platform.

clip_image002

Java is reconfirmed as the #1 programming language (both TIOBE index and PYPL index say so) and the evolution of the platform is demonstrated. Java is not the coolest kid on the block – following every trend, leading the hype cycles and offering the latest and greatest and bleedingest edge. Java is solid and evolutionary and enterprise level. While some in the community showed disappointment for this perceived lack of vigor and daring, the majority of attendees seem to appreciate that this steady evolution of Java means that they can carry their skills forward for quite some time to come.

When you think about it, it is quite a feat that companies such as IBM, RedHat and Oracle – for all their own reasons – drive forward the Java platform (programming language, JVM, Enterprise Edition, Embedded) and the community around it. The result of their labors is that I can continue to benefit from Java – even when the world of IT has changed dramatically since Java first came out. As long as I evolve along with Java, I get to benefit from multicore chip architectures, WebSockets and HTTP/2, from REST & JSON, from functional programming paradigms and real time embedded processing and many other recent and upcoming trends and developments. Apparently, what is good for Java is good for IBM and Oracle and also not too bad for me.

clip_image004

The yearly appeal from Oracle – and before that from Sun Microsystems – to the community to join in and help further Java struck home with me for perhaps the first time. Oracle and the other major parties involved have a limited number of resources. They are not all-knowing either. Opinions, experiences, suggestions and contributions to specification designs really can make a difference. With all the flaws in the process and the political machinations around it, the evolution of the biggest programming language on the planet is still pretty much done in the open, and it is quite easy to be a part of that. Having influence is not that hard; with only a limited number of voices joining the discussion, I, and you as well, can join in easily and be heard. Complaining about the perceived lack of progress of aspects of Java is not your only option: you can be a part of it. It is not some sort of hallowed forum that thinks about the next steps – well, maybe it is, but that is not all there is. I feel now, more than ever, that I can make a contribution.

Java SE

Apart from many talks on Java 8 features – such as lambda expressions and stream processing and the Nashorn JavaScript engine integrated with the JVM – the main focus on this year’s conference was on Java 9 Modularity (aka project JigSaw).

Modularity is first of all about the Java runtime. Its aim: making it possible to pick only the pieces of the JRE your application needs, so that, for example, a tiny JRE can be embedded with an application. Part of the complexity of this modularity effort is the legacy the Java platform is carrying around. The platform is 20 years old and has always offered backwards compatibility, even to technologies that are now really outdated or to ideas that seemed good a long time ago. An overhaul of the Java platform is not easy, because of the size, the complexity and the fact that compatibility has to be respected, at least to some point.

Modularity was originally planned for Java SE 7, but could not be completed in time, nor could it for Java SE 8. It simply was too hard. During this more than five year period, a lot of work has been done in terms of cleaning up the core libraries and figuring out a way to get modularity covered. With Java SE 9 now around the corner, the dust has settled and project JigSaw is close to delivering on the promises, including a right-sized JRE and the end of classpath hell.

The concept of a module is introduced into Java. Modules explicitly describe what they offer (their public API), and what they need (dependencies), hence dependencies can be verified and resolved automatically during all phases of development. For those of you familiar with SCA (Service Component Architecture): a module seems very similar to a service component.

Modules bundle together one or more packages and offer stronger encapsulation than jars. Each module specifies its dependencies on other modules. By default, all classes/types inside a module are hidden from the outside world. Each module exposes specific packages for external consumption. The module system also includes a services layer that can bind service providers and consumers through interfaces. Think of it as inversion of control, where the module system fulfills the role of service registry.

clip_image006

Modularity will affect custom applications created on the Java platform, and the way that they are built and delivered. It will also have a large impact on the Java platform itself.

The Java SE 9 Platform Specification will divide the platform into a set of modules. An implementation of the Java SE 9 Platform can contain all of the platform modules or, possibly, just some of them. The only module known specifically to the module system is the base module, which is named java.base. The base module defines and exports all of the platform’s core packages, including the module system itself. The base module is always present. Every other module depends implicitly upon the base module, while the base module depends upon no other modules. The remaining platform modules will share the “java.” name prefix and are likely to include, e.g., java.sql for database connectivity, java.xml for XML processing, and java.logging for logging.

Some other new features in Java SE 9 and JDK 9 that got some air time are:

• HTTP/2 and WebSocket support (JEP 110)

• Light Weight JSON API (JEP 198)

• Money and Currency API (JSR 354)

• Common Logging System for all JVM components (JEP 158)

• Improved Lock Contention mechanism (JEP 143)

• Segmented Code Cache (JEP 197)

• Datagram Transport Layer Security (DTLS) (JEP 219)

• Stack-Walking API (JEP 259)

• JShell – the interactive Java language shell (project Kulla)

• JavaDoc.Next

– HTML 5 (JEP 224)

– Simplified Doclet API (for plugins into JavaDoc generator)

– JavaDoc Search

• Finalize Project Coin (JSR 334, JEP 213)

• Private interface methods

Note that Java SE 9 is feature complete in December 2015. General availability is scheduled for September 2016, right before next year’s JavaOne conference. OpenJDK builds for release 9 are available for download and experimentation: http://openjdk.java.net/projects/jdk9/. The roadmap is shown below.

clip_image008

Two special projects were highlighted during the JavaOne opening keynote session – Valhalla and Panama. Both demonstrate the ongoing evolution of the Java platform. Both are long term projects on which various large Java stakeholders collaborate, to ensure the continued relevance of Java. They are for the greater good – if maybe not at the shiny frontier of latest application development.

Project Valhalla reassesses Java, at a rather fundamental level, in light of modern hardware. When Java was first designed, CPU architectures were very different from today’s. The numbers of cores and threads were far smaller, and in the mid ‘90s a memory fetch was about as expensive as a calculation operation. Today, memory fetches can be more than 100 times as expensive. Project Valhalla looks to improve the JVM to leverage current hardware.

In this particular example, a special type of class is considered, called a value class: it codes like an object (but without inheritance) and performs like a primitive. Memory fetches are frequently done in Java when objects are retrieved. When an object is really nothing but a value holder, having to treat it like an object, with a pointer to a memory address for accessing it, becomes quite expensive. With value types, most of this overhead goes away. Other features under consideration are reified generics (retaining their actual type at runtime), generic specialization (List<int> would be valid and highly efficient) and ‘volatile’ enhancements.

Project Panama (http://openjdk.java.net/projects/panama/) deals with the Foreign Function Interface that allows external, non-Java libraries (such as DLLs on the Windows platform) to be integrated into Java programs. Today, such integration is supported – but it is hard and painful. Project Panama will make such interaction much smoother and efficient.

Java EE

Java EE 7 was released back in 2013. Only this year did Oracle WebLogic achieve full Java EE 7 compliance, with the 12.2.1 release. At the same time, the first few EE 7 APIs were implemented in WebLogic quite some time ago. Here is a trend that will continue with Java EE 8, and not just with Oracle WebLogic: Java EE application servers will take on some new APIs when these become available, and not wait for the full Java EE 8 release (spring 2017). Full Java EE 8 compatibility can take considerable time though.

The Java EE 7 platform is fully supported at this point by IBM WebSphere, TmaxSoft JEUS, RedHat WildFly, Cosminexus Hitachi Application Server and Oracle’s Glassfish and WebLogic Server.

clip_image010

Java EE 8 seemed to have cloud as a major theme, the way it was presented a couple of years back, and especially standardization of cloud platform management APIs. That theme no longer seems to apply. Another theme was ‘Project Avatar’ and special support for HTML5 and rich client web development. That too seems to have vanished. At this point, there is no real core theme in Java EE 8. There is evolution in most APIs, for example around new adjacent technologies such as HTTP/2 and Java SE, and there are some entirely new APIs (Model-View-Controller, Java EE Security). And there is one API that just missed the boat for Java EE 7 (JCache).

The latest release of GlassFish 4, the reference implementation of Java EE 7, was published in October 2015. Updates to the Java EE 7 specifications for JAX-RS, JMS, CDI and WebSocket were absorbed in this release. Most components were updated in some way, primarily with fixes and security updates.

The reference implementation for Java EE 8 will be GlassFish 5. Early builds can be downloaded and tried out from http://download.oracle.com/glassfish/5.0. Java EE 8 – and GlassFish 5 – is slated for Spring 2017.

clip_image012

Java ME Embedded

The really large numbers with Java arise when talking about the number of devices that run Java. Most of them – billions actually – are pretty small devices that run the small edition of the Java platform, called Java ME Embedded. These devices are nowadays typically equated to the things in the Internet of Things. That is one of the key themes from Oracle with Java ME Embedded: it helps power the intelligent edge of the internet of things.

clip_image014

Devices range from very small to quite big. They measure and sense and collect data to be reported to higher up. And the smarter they are, the more preprocessing – like filtering and aggregation – they can do, preventing the chain from being needlessly overwhelmed. They also display and actuate, based on signals received down the command chain (that is – from right to left in the overhead illustration). The smaller the device, the smaller the Java runtime available for creating local smartness. And that is where Java ME Embedded comes in: a small footprint, yet a proper (in step with Java SE) Java platform.

clip_image016

And with upcoming releases, Java ME Embedded Platform will grow even closer to the modular Java SE 9 platform. The roadmap for Java ME Embedded looks as follows:

clip_image018

The rapid evolution is obvious, as well as some of the themes: software provisioning, memory usage, security, specialized platform support and IoT. A number of Oracle specific elements are part of this roadmap – Developer Cloud Service, IoT Cloud Service.


as_json: Relational to JSON in Oracle Database


Some time ago I noticed this blog from Dan McGhan. In that blog he compares several ways to generate JSON from relational data in an Oracle Database.
I had some spare time, so I tried my own JSON generator, built around 3 nested Oracle types, on the examples he used.
I had no problem generating the exact output he wanted, but the performance was a bit disappointing (almost the same as the PL/JSON solution).
But, as I had some spare time, I started trying to improve the performance. After a few tries it turned out that I was lucky to get a result at all.
My JSON generator had a gigantic memory leak. I am not sure if it is because of Oracle’s implementation of the nested types I used

create or replace type jd authid current_user as object
( json_type varchar2(1 char)
) not final;
/

create or replace type ja as table of jd;
/

create or replace type jv authid current_user under jd
( .... )

but using this script is enough to crash my 12.1.0.2 database:

declare
  x jv;
begin
  for i in 1 .. 12
  loop
    x := jv( 'a', jv( 'b', jv( 'x' ) )
           , 'a', jv( 'b', jv( 'x' ) )
           );
  end loop;
end;
/

Crashed

Anyway, I had some spare time and turned my JSON generator into a one-package implementation. Using that package I can create the same JSON as Dan creates with PL/JSON and APEX_JSON,
but with performance a bit better than the APEX_JSON solution.

declare
--
  function get_dept_as_json( p_dept_id number )
  return clob
  is
    cursor c_dept
    is
      select d.department_id
           , d.department_name
           , l.location_id
           , l.street_address
           , l.postal_code
           , c.country_id
           , c.country_name
           , c.region_id
           , m.employee_id
           , m.first_name || ' ' || m.last_name manager_name
           , m.salary
           , j.job_id
           , j.job_title
           , j.min_salary
           , j.max_salary
      from departments d
         , locations l
         , countries c
         , employees m
         , jobs j
      where d.department_id = p_dept_id
      and   l.location_id = d.location_id
      and   c.country_id = l.country_id
      and   m.employee_id (+) = d.manager_id
      and   j.job_id (+) = m.job_id;
    r_dept c_dept%rowtype;
    l_jv as_json.tp_json_value;
    l_emps as_json.tp_json_value;
    l_hist as_json.tp_json_value;
    l_date_format varchar2(20) := 'DD-MON-YYYY';
    l_rv clob;
  begin
    open c_dept;
    fetch c_dept into r_dept;
    close c_dept;
--
    dbms_lob.createtemporary( l_rv, true, dbms_lob.call );
    l_jv := as_json.json( 'id', as_json.jv( r_dept.department_id )
              , 'name', as_json.jv( r_dept.department_name )
              , 'location', as_json.json( 'id', as_json.jv( r_dept.location_id )
                              , 'streetAddress', as_json.jv( r_dept.street_address )
                              , 'postalCode', as_json.jv( r_dept.postal_code )
                              , 'country', as_json.json( 'id', as_json.jv( r_dept.country_id )
                                             , 'name', as_json.jv( r_dept.country_name )
                                             , 'regionId', as_json.jv( r_dept.region_id )
                                             )
                              )
              , 'manager', as_json.json( 'id', as_json.jv( r_dept.employee_id )
                             , 'name', as_json.jv( r_dept.manager_name )
                             , 'salary', as_json.jv( r_dept.salary )
                             , 'job', as_json.json( 'id', as_json.jv( r_dept.job_id )
                                        , 'title', as_json.jv( r_dept.job_title )
                                        , 'minSalary', as_json.jv( r_dept.min_salary )
                                        , 'maxSalary', as_json.jv( r_dept.max_salary )
                                        )
                             )
              );
--
    for r_emp in ( select e.employee_id
                        , e.first_name || ' ' || e.last_name name
                        , e.hire_date
                        , e.commission_pct
                   from employees e
                   where e.department_id = r_dept.department_id
                 )
    loop
      l_hist := null;
      for r_hist in ( select h.job_id
                           , h.department_id
                           , h.start_date
                           , h.end_date
                      from job_history h
                      where h.employee_id = r_emp.employee_id
                    )
      loop
        l_hist := as_json.add_item( l_hist
                          , as_json.json( 'id', as_json.jv( r_hist.job_id )
                                , 'departmentId', as_json.jv( r_hist.department_id )
                                , 'startDate', as_json.jv( r_hist.start_date, l_date_format )
                                , 'endDate', as_json.jv( r_hist.end_date, l_date_format )
                                )
                          );
      end loop;
      l_emps := as_json.add_item( l_emps
                        , as_json.json( 'id', as_json.jv( r_emp.employee_id )
                              , 'name', as_json.jv( r_emp.name )
                              , 'isSenior', as_json.jv( r_emp.hire_date < to_date( '01-jan-2005', 'dd-mon-yyyy' ) )
                              , 'commissionPct', as_json.jv( r_emp.commission_pct )
                              , 'jobHistory', l_hist
                              )
                        );
    end loop;
    as_json.add_member( l_jv, 'employees', l_emps );
--
    l_rv := as_json.stringify( l_jv );
    as_json.free;
    return l_rv;
  end;
begin
  dbms_output.put_line( get_dept_as_json( 10 ) );
end;

Anton


Consuming a REST service from your ADF 12.2.1 application


With the release of ADF 12.2.1 in the fall of 2015, Oracle finally added support for declaratively consuming a RESTful web service that responds in JSON format. Until then, processing JSON from a REST web service was only possible using Java.
The ability to consume and process the web service response declaratively enables fast implementation in applications that require this. In this post, I will show how you can use this feature, with the help of a demo application.

The RESTful web service to consume

I will not go into the details of the definition of REST web services or JSON, but start immediately with an example. For this example I will use a service, exposed from the Google Geocoding API. The service can be invoked by an HTTP request, and responds with JSON.
The URL to be invoked is: http://maps.googleapis.com/maps/api/geocode/json
The service request takes one parameter, named “address”. The address can contain any address you can imagine, and when you invoke the service, the response will contain the geographical longitude and latitude of this location, and the postal code, if available. For the demo application I set the retrieval of the postal code as the goal.

Let us see what a typical response looks like, by invoking the service from the browser and putting the following request in its address bar:
http://maps.googleapis.com/maps/api/geocode/json?address="Edisonbaan 15 Nieuwegein"

The browser will execute a GET request and the result will be:

{
   "results" : [
      {
         "address_components" : [
            {
               "long_name" : "15",
               "short_name" : "15",
               "types" : [ "street_number" ]
            },
            {
               "long_name" : "Edisonbaan",
               "short_name" : "Edisonbaan",
               "types" : [ "route" ]
            },
            {
               "long_name" : "Nieuwegein",
               "short_name" : "Nieuwegein",
               "types" : [ "locality", "political" ]
            },
            {
               "long_name" : "Nieuwegein",
               "short_name" : "Nieuwegein",
               "types" : [ "administrative_area_level_2", "political" ]
            },
            {
               "long_name" : "Utrecht",
               "short_name" : "UT",
               "types" : [ "administrative_area_level_1", "political" ]
            },
            {
               "long_name" : "Nederland",
               "short_name" : "NL",
               "types" : [ "country", "political" ]
            },
            {
               "long_name" : "3439 MN",
               "short_name" : "3439 MN",
               "types" : [ "postal_code" ]
            }
         ],
         "formatted_address" : "Edisonbaan 15, 3439 MN Nieuwegein, Nederland",
         "geometry" : {
            "location" : {
               "lat" : 52.0334908,
               "lng" : 5.099067900000001
            },
            "location_type" : "ROOFTOP",
            "viewport" : {
               "northeast" : {
                  "lat" : 52.03483978029151,
                  "lng" : 5.100416880291503
               },
               "southwest" : {
                  "lat" : 52.03214181970851,
                  "lng" : 5.097718919708499
               }
            }
         },
         "place_id" : "ChIJ2X-AAEpkxkcRnIHiNNbIzHM",
         "types" : [ "street_address" ]
      }
   ],
   "status" : "OK"
}

In general, when you develop a client for a RESTful web service, you will probably prefer to use a tool like Postman, which lets you invoke other operations than the GET of the browser, but for this example we only use the GET request, and therefore the browser is sufficient.

From the response, we can now read the latitude and longitude of our address, and also the postal code, which is in the address_components entry whose types array contains "postal_code".
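As an aside, the same extraction can be scripted outside ADF. Here is a minimal sketch using Python with the requests library (my own choice for illustration, not part of the demo application) that pulls the postal code out of the JSON response shown above:

import requests

resp = requests.get(
    "http://maps.googleapis.com/maps/api/geocode/json",
    params={"address": "Edisonbaan 15 Nieuwegein"},
)

# address_components is a list of objects, each with a long_name and a types array
components = resp.json()["results"][0]["address_components"]
postal_code = next(
    (c["long_name"] for c in components if "postal_code" in c["types"]),
    "Nothing found",
)
print(postal_code)  # 3439 MN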

Creating the data control

Now, let us see how we can perform this call and process the response from ADF. In a new ADF application, we choose to create a Web Service Data Control (SOAP/REST) from the gallery:


1. Creating a new Web Service Data Control

After clicking “OK”, we enter the Web Service Data Control wizard, where we give the data control a name, choose REST and the type of data control. The first option is to use an ADF based REST service, which is one that is based on ADF Business Components. You can see examples of this in this post.
For consuming the Geocode web service, however, we will need to use the second option: Generic Data Control.


2. Creating Web Service Data Control wizard, step 1 of 5

With the green plus we can create a new REST connection from here.


3. Creating REST connection

We fill in a name for the REST connection and enter the base URI, but without its last part: "json". We leave that part for a later step. Note that when you click the "Test Connection" button, you will see an error, but this is not fatal. We can go on with the wizard, so after clicking OK we get into step 2 of the wizard, which we can skip, because we don't implement any authentication here.
So we get into part three of the wizard, where we are supposed to enter a “Resource Path”.


4. Creating Web Service Data Control step 3 of 5

With the green plus we can create a new resource path, which we set to the last part of our base URI that we left out in the previous step: "/json". For the data format we choose JSON, select the "GET" method and give it a name: "get", for example. Then we click Next to go to step 4 of the wizard.


5. Creating Web Service Data Control step 4 of 5

Here we see the method that we have just created: "get". After we click on it, we can choose either a schema file or a sample. Since we have no schema file available, we choose "Parse from Sample Code" and paste the response we got earlier into the box. Furthermore, we add the address URL parameter.
Now we click "Next", which brings us to the last screen of the wizard, where we can test the data control. This should give us the "Connection Successful" message, like below.


6. Creating Web Service Data Control step 5 of 5

Now our data control is ready to use. Let us see what it looks like:


Geocode data control

We see the "get" method that we have defined, which takes one parameter: "address". The method returns a status and a "results" collection, which in turn contains the "address_components" collection, one of whose members will hold the postal code that we are looking for.

Creating a page declaratively

Let's make a simple page where we just display the list of address components, declaratively. First we drag the "get" method from the Data Controls section onto the page and choose "ADF Parameter Form" from the context menu. We give the input field the label "Address" so it is clear what needs to be entered here. The submit button gets the label "get" by default, but we can change that of course.
After that, we drag the "address_components" collection onto the page and choose "ADF Table", which we make read-only.
One more thing we need to do is set the PartialTriggers property of the table to the button, so that the response is displayed immediately. When we run the page, the result is as follows:


Test Page on REST service, created declaratively

Now we enter some address in the address field, for instance “Edisonbaan 15 Nieuwegein” and press the “get” button. The result will be:


Test Page with the service call executed

So, as you can see, we have the postal code that we were looking for, in the last row of the table.

We can conclude from this that it is easy to make a simple page based on a RESTful service. But what if we want to display just the postal code in an extra field?
This can be done, but with some Java code, as I will show below.

Reading data from the response programmatically

We will now add an extra input field, which we put in disabled mode, that will contain the postal code as retrieved from the Geocode service.
First we drag the input text component from the component palette onto the page and give it an appropriate label. For the value property, we invoke the Expression Builder, which lets us create a managed bean when we select the ADF Managed Beans node.


Creating managed bean

We call this bean the geocodeBean and give it view scope.

After that we can create the property for postal code and set this property on the value of our input field. We also add the button to the PartialTriggers property.


Creating postal code property

Then we change the get button’s action listener property to a new managed bean method, called “findPostalCode”.


Creating new action listener for button

Then we add some code to this method:

    public void findPostalCode(ActionEvent actionEvent) {
        BindingContext bindingContext = BindingContext.getCurrent();
        BindingContainer bindings = bindingContext.getCurrentBindingsEntry();
        OperationBinding operationBinding = bindings.getOperationBinding("get");
        operationBinding.execute();

        DCIteratorBinding addressComponentsIterator =
             ((DCBindingContainer) bindings).findIteratorBinding("address_componentsIterator");
        addressComponentsIterator.setRangeSize(-1);
        Row[] rows = addressComponentsIterator.getAllRowsInRange();

        if (rows != null) {
            Optional postalCode =
            Arrays.stream(rows)
            .map(e -> (DCDataRow) e)
            .map(e -> (Map) e.getDataProvider())
            .filter(map -> ((List) map.get("types")).contains("postal_code"))
            .map(e -> (String) e.get("long_name"))
            .map(e -> {return (e == null ? "Nothing found" : e);})
            .findFirst()
            ;
          postalCode.ifPresent(this::setPostalCode);
        }
    }

Here you can see some Java 8 style code to iterate over the address components and look for the component that has a types array that contains the text “postal_code”.

We can now make the table with address components invisible, but we need to keep the binding available.

Let us first run the page to see if it works, and after that look more closely at the code.

We run the page, type in the address and after pressing the “get” button we see the postal code appear in the field below.

Page with postal code after REST call has been executed

In the code you see that the “get” method is executed and that the rows from the address_components iterator are then collected into a Row[] array.
If we cast each Row to DCDataRow, we can get the underlying data provider with the method “getDataProvider()”. This will give us either a LinkedHashMap for an object, or an ArrayList for an array.

By creating a Java stream from this array, we are able to filter and map its elements using lambda expressions, and retrieve the element that we are looking for.

Conclusion

To conclude: Oracle made it easy in the ADF 12.2.1 release to consume, declaratively, a REST web service that produces JSON.
With the help of the binding layer (ADF Model), we can also process the data programmatically in Java, without having to parse JSON, because the data is automatically converted into Java object structures.

You can download the demo application here.


The post Consuming a REST service from your ADF 12.2.1 application appeared first on AMIS Oracle and Java Blog.


Doing performance measurements of an OSB Proxy Service by programmatically extracting performance metrics via the ServiceDomainMBean and presenting them as an image via a PowerPoint VBA module


This article explains how the process of doing performance measurements of an OSB Proxy Service and presenting them in a “performance analysis document” was partly automated. After running a SoapUI based Test Step (sending a request to the service), the service performance metrics were extracted by using the ServiceDomainMBean in the public API of the Oracle Service Bus. These service performance metrics can also be seen in the Oracle Service Bus Console via the Service Monitoring Details. Furthermore, this article explains how these service performance metrics are used by a PowerPoint VBA module and a slide with placeholders to generate an image with the injected metric values. This image is used to present the measurements in the “performance analysis document”.


Performance issues

In a web application we had performance issues on a page that showed data loaded via a web service (deployed on Oracle Service Bus 11gR1). On this page, an application user can fill in some search criteria; when the search button is pressed, data is retrieved (from a database) via the MyProxyService and shown on the page in table format.

Web application

Performance analysis document

Based on knowledge about the data, the business owner of the application put together a number of test cases to be used for the performance measurements, in order to determine whether the performance requirements are met. All in all there were 9 different test cases. For some of these test cases data was retrieved for a period of 2 weeks, for example, and for others for a period of 2 months.

Because it was not certain what caused the lack of performance, not only the front-end but also the back-end OSB Proxy Service was to be investigated, and the performance measurement results were to be documented (in the “performance analysis document”). It was known from the start that once the problem was pinpointed and a solution was chosen and put in place, the performance measurements would have to be carried out again and the results documented once more.

The “performance analysis document” is the central document used by the business owner of the application and a team of specialists as the basis for choosing solutions for the lack of performance in the web page. It contains an overview of all the measurements that were done (front-end and also back-end), the software used, details about the services in question, performance requirements, an overview of the test cases that were used, a summary, etc.

Because a picture says more than a thousand words, the OSB Proxy Service was represented in the “performance analysis document” as shown below (the real names are left out). For each of the 9 test cases such a picture was used.

Picture used in the performance analysis document

The OSB Proxy Service (for this article renamed to MyProxyService) contains a Request Response Pipeline with several Stages, Pipeline Pairs, a Route and several Service Callouts. For each component a response time is presented.

Service Monitoring Details

In the Oracle Service Bus Console, Pipeline Monitoring was enabled (at Action level or above) via the Operational Settings | Monitoring of the MyProxyService.

Enabled Pipeline Monitoring

Before a test case was started, the Statistics of the MyProxyService were reset (by hand) in the Oracle Service Bus Console.

All 9 test cases (requests with different search criteria) were set up in SoapUI, in order to make it easy to repeat them. To get average performance measurements, a total of 5 calls (requests) were executed per test case. For the MyProxyService, the results of these 5 calls were investigated in the Oracle Service Bus Console via the Service Monitoring Details.

Service Monitoring Details

In the example shown above, based on the message count of 5, the overall average response time is 820 msecs. The Service Metrics tab displays the metrics for a proxy service or a business service. The Pipeline Metrics tab (only available for proxy services) gives information on various components of the pipeline of the service. The Action Metrics tab (only available for proxy services) presents information on actions in the pipeline of the service, displayed as a hierarchy of nodes and actions.

At first the Service Monitoring Details (of the Oracle Service Bus Console) for a particular test case were copied by hand into a PowerPoint slide, and from there a picture was created, which was then copied into the “performance analysis document” at the particular test case paragraph.

Because of the number of measurements that had to be made for the “before situation” and the “after situation” (once the solution was put in place), it was decided to partly automate this process. Also, with future updates of the MyProxyService code in mind, it was anticipated that the performance measurements for the 9 test cases would have to be carried out again after each update.

Overview of the partly automated process

In the partly automated process, an image is derived from a PowerPoint slide and a customized VBA module. Office applications such as PowerPoint have Visual Basic for Applications (VBA), a programming language that lets you extend those applications. The VBA module reads data from a text file (MyProxyServiceStatisticsForPowerpoint.txt), replaces certain text frames (placeholders, for example CODE_Enrichment_request_elapsed-time) on the slide with data from the text file and in the end exports the slide to an image (png file). The image can then easily be inserted into the “performance analysis document” at the particular test case paragraph.

Text frame with placeholder CODE_Enrichment_request_elapsed-time ==> (injected service performance metric value) ==> Text frame with the injected value for placeholder CODE_Enrichment_request_elapsed-time

 

To create the text file with service monitoring details, the JMX Monitoring API was used. For details about this API see:

Java Management Extensions (JMX) Monitoring API in Oracle Service Bus (OSB)

ServiceDomainMBean

I will now explain a little bit more about the ServiceDomainMBean and how it can be used.

The public JMX APIs are modeled by a single instance of ServiceDomainMBean, which has operations to check for monitored services and retrieve data from them. A public set of POJOs provide additional objects and methods that, along with the ServiceDomainMBean, provide a complete API for monitoring statistics.

There also is a sample program in the Oracle documentation (mentioned above) that demonstrates how to use the JMX Monitoring API.

Most of the information that is shown in the Service Monitoring Details page can be retrieved via the ServiceDomainMBean. This does not apply to the Action Metrics (unfortunately). The POJO object ResourceType represents all types of resources that are enabled for service monitoring. The four enum constants representing types are shown in the following table:

 

Service Monitoring Details tab | ResourceType enum | Description

Service Metrics | SERVICE | A service is an inbound or outbound endpoint that is configured within Oracle Service Bus. It may have an associated WSDL, security settings, and so on.

Pipeline Metrics | FLOW_COMPONENT | Statistics are collected for the following two types of components that can be present in the flow definition of a proxy service: Pipeline Pair node and Route node.

Action Metrics | - | -

Operations | WEBSERVICE_OPERATION | This resource type provides statistical information pertaining to WSDL operations. Statistics are reported for each defined operation.

- | URI | This resource type provides statistical information pertaining to endpoint URI for a business service. Statistics are reported for each defined Endpoint URI.

 

Overview of extracting performance metrics and using them by a PowerPoint VBA module

Based on the above mentioned sample program, a customized program was created in Oracle JDeveloper to retrieve performance metrics for the MyProxyService, and more specifically for a particular list of components (“Initialization_request”, “Enrichment_request”, “RouteToADatabaseProcedure”, “Enrichment_response”, “Initialization_response”). Also an executable jar file MyProxyServiceStatisticsRetriever.jar was created via a Deployment Profile. The program creates a text file MyProxyServiceStatistics_2016_01_14.txt with the measurements and another text file MyProxyServiceStatisticsForPowerpoint.txt with specific key value pairs to be used by the PowerPoint VBA module.

Because the measurements had to be carried out on different WebLogic domains, a batch file MyProxyServiceStatisticsRetriever.bat was created where the domain specific connection credentials can be passed in as program arguments.

Conclusion

After analyzing the measurements, it became obvious that the lack of performance was caused mainly by the call to the database procedure via RouteToADatabaseProcedure. So a solution was put in place whereby a caching mechanism (of pre-aggregated data) was used.

Keep in mind that Action Metrics statistics can’t be gathered via the ServiceDomainMBean, and that for the Pipeline Metrics only Pipeline Pair node and Route node statistics can be gathered. Luckily, in my case, the main problem was in the Route node, so the ServiceDomainMBean could be used in a meaningful way.

It proved to be a good idea to partly automate the process of doing performance measurements and presenting them, because it saved a lot of time, due to the number of measurements that had to be made.

MyProxyServiceStatisticsRetriever.bat

D:\Oracle\Middleware\jdk160_24\bin\java.exe -classpath "MyProxyServiceStatisticsRetriever.jar;D:\Oracle\Middleware\wlserver_10.3\server\lib\weblogic.jar;D:\OSB_DEV\FMW_HOME\Oracle_OSB1\lib\sb-kernel-api.jar;D:\OSB_DEV\FMW_HOME\Oracle_OSB1\lib\sb-kernel-impl.jar;D:\OSB_DEV\FMW_HOME\Oracle_OSB1\modules\com.bea.common.configfwk_1.7.0.0.jar" myproxyservice.monitoring.MyProxyServiceStatisticsRetriever "appserver01" "7001" "weblogic" "weblogic" "C:\temp"

MyProxyServiceStatisticsRetriever.java

package myproxyservice.monitoring;


import com.bea.wli.config.Ref;
import com.bea.wli.monitoring.InvalidServiceRefException;
import com.bea.wli.monitoring.MonitoringException;
import com.bea.wli.monitoring.MonitoringNotEnabledException;
import com.bea.wli.monitoring.ResourceStatistic;
import com.bea.wli.monitoring.ResourceType;
import com.bea.wli.monitoring.ServiceDomainMBean;
import com.bea.wli.monitoring.ServiceResourceStatistic;
import com.bea.wli.monitoring.StatisticType;
import com.bea.wli.monitoring.StatisticValue;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

import java.net.MalformedURLException;

import java.text.SimpleDateFormat;

import java.util.Arrays;
import java.util.Date;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import javax.naming.Context;

import weblogic.management.jmx.MBeanServerInvocationHandler;


public class MyProxyServiceStatisticsRetriever {
    private ServiceDomainMBean serviceDomainMbean = null;
    private String serverName = null;
    private Ref[] proxyServiceRefs;
    private Ref[] filteredProxyServiceRefs;

    /**
     * Transforms a Long value into a time format that the Service Bus Console also uses (x secs y msecs).
     */
    private String formatToTime(Long value) {
        Long quotient = value / 1000;
        Long remainder = value % 1000;

        return Long.toString(quotient) + " secs " + Long.toString(remainder) +
            " msecs";
    }

    /**
     * Transforms a Long value into a time format that the Service Bus Console also uses (x secs y msecs).
     */
    private String formatToTimeForPowerpoint(Long value) {
        Long quotient = value / 1000;
        Long remainder = value % 1000;

        return Long.toString(quotient) + "secs" + Long.toString(remainder) +
            "msecs";
    }

    /**
     * Gets an instance of ServiceDomainMBean from the weblogic server.
     */
    private void initServiceDomainMBean(String host, int port, String username,
                                        String password) throws Exception {
        InvocationHandler handler =
            new ServiceDomainMBeanInvocationHandler(host, port, username,
                                                    password);

        Object proxy =
            Proxy.newProxyInstance(ServiceDomainMBean.class.getClassLoader(),
                                   new Class[] { ServiceDomainMBean.class },
                                   handler);

        serviceDomainMbean = (ServiceDomainMBean)proxy;
    }

    /**
     * Invocation handler class for ServiceDomainMBean class.
     */
    public static class ServiceDomainMBeanInvocationHandler implements InvocationHandler {
        private String jndiURL =
            "weblogic.management.mbeanservers.domainruntime";
        private String mbeanName = ServiceDomainMBean.NAME;
        private String type = ServiceDomainMBean.TYPE;

        private String protocol = "t3";
        private String hostname = "localhost";
        private int port = 7001;
        private String jndiRoot = "/jndi/";

        private String username = "weblogic";
        private String password = "weblogic";

        private JMXConnector conn = null;
        private Object actualMBean = null;

        public ServiceDomainMBeanInvocationHandler(String hostName, int port,
                                                   String userName,
                                                   String password) {
            this.hostname = hostName;
            this.port = port;
            this.username = userName;
            this.password = password;
        }

        /**
         * Gets JMX connection
         */
        public JMXConnector initConnection() throws IOException,
                                                    MalformedURLException {
            JMXServiceURL serviceURL =
                new JMXServiceURL(protocol, hostname, port,
                                  jndiRoot + jndiURL);
            Hashtable h = new Hashtable();

            if (username != null)
                h.put(Context.SECURITY_PRINCIPAL, username);
            if (password != null)
                h.put(Context.SECURITY_CREDENTIALS, password);

            h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                  "weblogic.management.remote");

            return JMXConnectorFactory.connect(serviceURL, h);
        }

        /**
         * Invokes specified method with specified params on specified
         * object.
         */
        public Object invoke(Object proxy, Method method,
                             Object[] args) throws Throwable {
            if (conn == null)
                conn = initConnection();

            if (actualMBean == null)
                actualMBean =
                        findServiceDomain(conn.getMBeanServerConnection(),
                                          mbeanName, type, null);

            return method.invoke(actualMBean, args);
        }

        /**
         * Finds the specified MBean object
         *
         * @param connection - A connection to the MBeanServer.
         * @param mbeanName  - The name of the MBean instance.
         * @param mbeanType  - The type of the MBean.
         * @param parent     - The name of the parent Service. Can be NULL.
         * @return Object - The MBean or null if the MBean was not found.
         */
        public Object findServiceDomain(MBeanServerConnection connection,
                                        String mbeanName, String mbeanType,
                                        String parent) {
            try {
                ObjectName on = new ObjectName(ServiceDomainMBean.OBJECT_NAME);
                return (ServiceDomainMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                         on);
            } catch (MalformedObjectNameException e) {
                e.printStackTrace();
                return null;
            }
        }
    }

    public MyProxyServiceStatisticsRetriever(HashMap props) {
        super();
        try {
            String comment = null;
            String[] arrayResourceNames =
            { "Initialization_request", "Enrichment_request",
              "RouteToADatabaseProcedure",
              "Enrichment_response",
              "Initialization_response" };
            List filteredResourceNames =
                Arrays.asList(arrayResourceNames);

            Properties properties = new Properties();
            properties.putAll(props);

            initServiceDomainMBean(properties.getProperty("HOSTNAME"),
                                   Integer.parseInt(properties.getProperty("PORT")),
                                   properties.getProperty("USERNAME"),
                                   properties.getProperty("PASSWORD"));

            // Save retrieved statistics.
            String fileName =
                properties.getProperty("DIRECTORY") + "\\" + "MyProxyServiceStatistics" +
                "_" +
                new SimpleDateFormat("yyyy_MM_dd").format(new Date(System.currentTimeMillis())) +
                ".txt";
            FileWriter out = new FileWriter(new File(fileName));

            String fileNameForPowerpoint =
                properties.getProperty("DIRECTORY") + "\\" +
                "MyProxyServiceStatisticsForPowerpoint" + ".txt";
            FileWriter outForPowerpoint =
                new FileWriter(new File(fileNameForPowerpoint));


            out.write("*********************************************");
            out.write("\nThis file contains statistics for a proxy service on WebLogic Server " +
                      properties.getProperty("HOSTNAME") + ":" +
                      properties.getProperty("PORT") + " and:");

            out.write("\n\tDomainName: " + serviceDomainMbean.getDomainName());
            out.write("\n\tClusterName: " +
                      serviceDomainMbean.getClusterName());
            for (int i = 0; i < (serviceDomainMbean.getServerNames()).length;
                 i++) {
                out.write("\n\tServerName: " +
                          serviceDomainMbean.getServerNames()[i]);
            }
            out.write("\n***********************************************");

            proxyServiceRefs =
                    serviceDomainMbean.getMonitoredProxyServiceRefs();

            if (proxyServiceRefs != null && proxyServiceRefs.length != 0) {

                filteredProxyServiceRefs = new Ref[1];
                for (int i = 0; i < proxyServiceRefs.length; i++) {
                    System.out.println("ProxyService fullName: " +
                                       proxyServiceRefs[i].getFullName());
                    if (proxyServiceRefs[i].getFullName().equalsIgnoreCase("MyProxyService")) {
                        filteredProxyServiceRefs[0] = proxyServiceRefs[i];
                    }
                }
                if (filteredProxyServiceRefs != null &&
                    filteredProxyServiceRefs.length != 0) {
                    for (int i = 0; i < filteredProxyServiceRefs.length; i++) {
                        System.out.println("Filtered proxyService fullName: " +
                                           filteredProxyServiceRefs[i].getFullName());
                    }
                }

                System.out.println("Started...");
                for (ResourceType resourceType : ResourceType.values()) {
                    // Only process the following resource types: SERVICE,FLOW_COMPONENT,WEBSERVICE_OPERATION
                    if (resourceType.name().equalsIgnoreCase("URI")) {
                        continue;
                    }
                    HashMap proxyServiceResourceStatisticMap =
                        serviceDomainMbean.getProxyServiceStatistics(filteredProxyServiceRefs,
                                                                     resourceType.value(),
                                                                     null);

                    for (Map.Entry mapEntry :
                         proxyServiceResourceStatisticMap.entrySet()) {
                        System.out.println("======= Printing statistics for service: " +
                                           mapEntry.getKey().getFullName() +
                                           " and resourceType: " +
                                           resourceType.toString() +
                                           " =======");

                        if (resourceType.toString().equalsIgnoreCase("SERVICE")) {
                            comment =
                                    "(Comparable to Service Bus Console | Service Monitoring Details | Service Metrics)";
                        } else if (resourceType.toString().equalsIgnoreCase("FLOW_COMPONENT")) {
                            comment =
                                    "(Comparable to Service Bus Console | Service Monitoring Details | Pipeline Metrics )";
                        } else if (resourceType.toString().equalsIgnoreCase("WEBSERVICE_OPERATION")) {
                            comment =
                                    "(Comparable to Service Bus Console | Service Monitoring Details | Operations)";
                        }
                        out.write("\n\n======= Printing statistics for service: " +
                                  mapEntry.getKey().getFullName() +
                                  " and resourceType: " +
                                  resourceType.toString() + " " + comment +
                                  " =======");
                        ServiceResourceStatistic serviceStats =
                            mapEntry.getValue();

                        out.write("\nStatistic collection time is - " +
                                  new Date(serviceStats.getCollectionTimestamp()));
                        try {
                            ResourceStatistic[] resStatsArray =
                                serviceStats.getAllResourceStatistics();

                            for (ResourceStatistic resStats : resStatsArray) {
                                if (resourceType.toString().equalsIgnoreCase("FLOW_COMPONENT") &&
                                    !filteredResourceNames.contains(resStats.getName())) {
                                    continue;
                                }
                                if (resourceType.toString().equalsIgnoreCase("WEBSERVICE_OPERATION") &&
                                    !resStats.getName().equalsIgnoreCase("MyGetDataOperation")) {
                                    continue;
                                }

                                // Print resource information
                                out.write("\nResource name: " +
                                          resStats.getName());
                                out.write("\n\tResource type: " +
                                          resStats.getResourceType().toString());

                                // Now get and print statistics for this resource
                                StatisticValue[] statValues =
                                    resStats.getStatistics();
                                for (StatisticValue value : statValues) {
                                    if (resourceType.toString().equalsIgnoreCase("SERVICE") &&
                                        !value.getName().equalsIgnoreCase("response-time")) {
                                        continue;
                                    }
                                    if (resourceType.toString().equalsIgnoreCase("FLOW_COMPONENT") &&
                                        !value.getType().toString().equalsIgnoreCase("INTERVAL")) {
                                        continue;
                                    }
                                    if (resourceType.toString().equalsIgnoreCase("WEBSERVICE_OPERATION") &&
                                        !value.getType().toString().equalsIgnoreCase("INTERVAL")) {
                                        continue;
                                    }

                                    out.write("\n\t\tStatistic Name - " +
                                              value.getName());
                                    out.write("\n\t\tStatistic Type - " +
                                              value.getType());

                                    // Determine statistics type
                                    if (value.getType() ==
                                        StatisticType.INTERVAL) {
                                        StatisticValue.IntervalStatistic is =
                                            (StatisticValue.IntervalStatistic)value;

                                        // Print interval statistics values
                                        out.write("\n\t\t\tMessage Count: " +
                                                  is.getCount());
                                        out.write("\n\t\t\tMin Response Time: " +
                                                  formatToTime(is.getMin()));
                                        out.write("\n\t\t\tMax Response Time: " +
                                                  formatToTime(is.getMax()));
                                        /* out.write("\n\t\t\tSum Value - " +
                                                  is.getSum()); */
                                        out.write("\n\t\t\tOverall Avg. Response Time: " +
                                                  formatToTime(is.getAverage()));

                                        if (resourceType.toString().equalsIgnoreCase("SERVICE")) {
                                            outForPowerpoint.write("CODE_SERVICE_" +
                                                                   value.getName() +
                                                                   ";" +
                                                                   formatToTimeForPowerpoint(is.getAverage()));
                                        }
                                        if (resourceType.toString().equalsIgnoreCase("FLOW_COMPONENT")) {
                                            outForPowerpoint.write("\r\nCODE_" +
                                                                   resStats.getName() +
                                                                   "_" +
                                                                   value.getName() +
                                                                   ";" +
                                                                   formatToTimeForPowerpoint(is.getAverage()));
                                        }
                                    } else if (value.getType() ==
                                               StatisticType.COUNT) {
                                        StatisticValue.CountStatistic cs =
                                            (StatisticValue.CountStatistic)value;

                                        // Print count statistics value
                                        out.write("\n\t\t\t\tCount Value - " +
                                                  cs.getCount());
                                    } else if (value.getType() ==
                                               StatisticType.STATUS) {
                                        StatisticValue.StatusStatistic ss =
                                            (StatisticValue.StatusStatistic)value;
                                        // Print count statistics value
                                        out.write("\n\t\t\t\t Initial Status - " +
                                                  ss.getInitialStatus());
                                        out.write("\n\t\t\t\t Current Status - " +
                                                  ss.getCurrentStatus());
                                    }
                                }
                            }

                            out.write("\n=========================================");

                        } catch (MonitoringNotEnabledException mnee) {
                            // Statistics not available
                            out.write("\nWARNING: Monitoring is not enabled for this service... Do something...");
                            out.write("\n=====================================");

                        } catch (InvalidServiceRefException isre) {
                            // Invalid service
                            out.write("\nERROR: Invlaid Ref. May be this service is deleted. Do something...");
                            out.write("\n======================================");
                        } catch (MonitoringException me) {
                            // Statistics not available
                            out.write("\nERROR: Failed to get statistics for this service...Details: " +
                                      me.getMessage());
                            me.printStackTrace();
                            out.write("\n======================================");
                        }
                    }
                }
                System.out.println("Finished");
            }
            // Flush and close file.
            out.flush();
            out.close();
            // Flush and close file.
            outForPowerpoint.flush();
            outForPowerpoint.close();


        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        try {
            if (args.length < 5) {
                System.out.println("Use the following arguments: HOSTNAME, PORT, USERNAME, PASSWORD, DIRECTORY. For example: appserver01 7001 weblogic weblogic C:\\temp");

            } else {
                HashMap map = new HashMap();

                map.put("HOSTNAME", args[0]);
                map.put("PORT", args[1]);
                map.put("USERNAME", args[2]);
                map.put("PASSWORD", args[3]);
                map.put("DIRECTORY", args[4]);
                MyProxyServiceStatisticsRetriever myProxyServiceStatisticsRetriever =
                    new MyProxyServiceStatisticsRetriever(map);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}

The VBA module

Sub ReadFromFile()

 Dim FileNum As Integer
 Dim FileName As String
 Dim InputBuffer As String
 Dim oSld As Slide
 Dim oShp As Shape
 Dim oTxtRng As TextRange
 Dim oTmpRng As TextRange
 Dim strWhatReplace As String, strReplaceText As String
 Dim property As Variant
 Dim key As String
 Dim value As String
 Dim sImagePath As String
 Dim sImageName As String
 Dim sPrefix As String
 Dim lPixwidth As Long    '  size in pixels of exported image
 Dim lPixheight As Long


 FileName = "C:\temp\MyProxyServciceStatisticsForPowerpoint.txt"
 FileNum = FreeFile

 On Error GoTo Err_ImageSave

 sImagePath = "C:\temp"
 sPrefix = "MyProxyServiceStatistics"
 lPixwidth = 1024
 ' Set height proportional to slide height
 lPixheight = (lPixwidth * ActivePresentation.PageSetup.SlideHeight) / ActivePresentation.PageSetup.SlideWidth


' A little error checking
 If Dir$(FileName) <> "" Then ' the file exists, it's safe to continue
   Open FileName For Input As FileNum

   While Not EOF(FileNum)
    Input #FileNum, InputBuffer
    ' Do whatever you need to with the contents of InputBuffer
    'MsgBox InputBuffer
    property = Split(InputBuffer, ";")
    For element = 0 To UBound(property)
      If element = 0 Then
        key = property(element)
      End If
      If element = 1 Then
        value = property(element)
      End If
    Next element
    ' MsgBox key
    ' MsgBox value

    ' write find text
    strWhatReplace = key
     ' write change text
    strReplaceText = value
    ' MsgBox strWhatReplace

    ' go through each slide
    For Each oSld In ActivePresentation.Slides
         ' go through each shape and text range
        For Each oShp In oSld.Shapes
            If oShp.Type = msoTextBox Then

                ' replace in TextFrame
                Set oTxtRng = oShp.TextFrame.TextRange
                Set oTmpRng = oTxtRng.Replace( _
                FindWhat:=strWhatReplace, _
                Replacewhat:=strReplaceText, _
                WholeWords:=True)

                Do While Not oTmpRng Is Nothing

                    Set oTxtRng = oTxtRng.Characters _
                    (oTmpRng.Start + oTmpRng.Length, oTxtRng.Length)
                    Set oTmpRng = oTxtRng.Replace( _
                    FindWhat:=strWhatReplace, _
                    Replacewhat:=strReplaceText, _
                    WholeWords:=True)
                Loop
                oShp.TextFrame.WordWrap = False

            End If
        Next oShp
        sImageName = sPrefix & "-" & oSld.SlideIndex & ".png"
        oSld.Export sImagePath & "\" & sImageName, "PNG", lPixwidth, lPixheight

    Next oSld
   Wend

   Close FileNum
   MsgBox "Gereed"
 Else
   ' the file isn't there. Don't try to open it.
 End If

Err_ImageSave:
     If Err <> 0 Then
       MsgBox Err.Description
     End If

End Sub

MyProxyServiceStatistics_2016_01_14.txt

*********************************************
This file contains statistics for a proxy service on WebLogic Server appserver01:7001 and:
	DomainName: DM_OSB_DEV1
	ClusterName: CL_OSB_01
	ServerName: MS_OSB_01
	ServerName: MS_OSB_02
***********************************************

======= Printing statistics for service: MyProxyService and resourceType: SERVICE (Comparable to Service Bus Console | Service Monitoring Details | Service Metrics) =======
Statistic collection time is - Thu Jan 14 11:26:00 CET 2016
Resource name: Transport
	Resource type: SERVICE
		Statistic Name - response-time
		Statistic Type - INTERVAL
			Message Count: 5
			Min Response Time: 0 secs 552 msecs
			Max Response Time: 1 secs 530 msecs
			Overall Avg. Response Time: 0 secs 820 msecs
=========================================

======= Printing statistics for service: MyProxyService and resourceType: FLOW_COMPONENT (Comparable to Service Bus Console | Service Monitoring Details | Pipeline Metrics ) =======
Statistic collection time is - Thu Jan 14 11:26:00 CET 2016
Resource name: MyGetDataOperation
	Resource type: FLOW_COMPONENT
		Statistic Name - Validation_request
		Statistic Type - INTERVAL
			Message Count: 5
			Min Response Time: 0 secs 0 msecs
			Max Response Time: 0 secs 0 msecs
			Overall Avg. Response Time: 0 secs 0 msecs
		Statistic Name - Validation_response
		Statistic Type - INTERVAL
			Message Count: 5
			Min Response Time: 0 secs 0 msecs
			Max Response Time: 0 secs 0 msecs
			Overall Avg. Response Time: 0 secs 0 msecs
		Statistic Name - Authorization_request
		Statistic Type - INTERVAL
			Message Count: 5
			Min Response Time: 0 secs 42 msecs
			Max Response Time: 0 secs 62 msecs
			Overall Avg. Response Time: 0 secs 52 msecs
		Statistic Name - Authorization_response
		Statistic Type - INTERVAL
			Message Count: 5
			Min Response Time: 0 secs 0 msecs
			Max Response Time: 0 secs 0 msecs
			Overall Avg. Response Time: 0 secs 0 msecs
Resource name: Initialization_request
	Resource type: FLOW_COMPONENT
		Statistic Name - elapsed-time
		Statistic Type - INTERVAL
			Message Count: 5
			Min Response Time: 0 secs 0 msecs
			Max Response Time: 0 secs 1 msecs
			Overall Avg. Response Time: 0 secs 0 msecs
Resource name: Enrichment_request
	Resource type: FLOW_COMPONENT
		Statistic Name - elapsed-time
		Statistic Type - INTERVAL
			Message Count: 5
			Min Response Time: 0 secs 196 msecs
			Max Response Time: 0 secs 553 msecs
			Overall Avg. Response Time: 0 secs 298 msecs
Resource name: Initialization_response
	Resource type: FLOW_COMPONENT
		Statistic Name - elapsed-time
		Statistic Type - INTERVAL
			Message Count: 5
			Min Response Time: 0 secs 1 msecs
			Max Response Time: 0 secs 3 msecs
			Overall Avg. Response Time: 0 secs 2 msecs
Resource name: RouteToADatabaseProcedure
	Resource type: FLOW_COMPONENT
		Statistic Name - elapsed-time
		Statistic Type - INTERVAL
			Message Count: 5
			Min Response Time: 0 secs 116 msecs
			Max Response Time: 0 secs 174 msecs
			Overall Avg. Response Time: 0 secs 146 msecs
Resource name: Enrichment_response
	Resource type: FLOW_COMPONENT
		Statistic Name - elapsed-time
		Statistic Type - INTERVAL
			Message Count: 5
			Min Response Time: 0 secs 119 msecs
			Max Response Time: 0 secs 411 msecs
			Overall Avg. Response Time: 0 secs 230 msecs
=========================================

======= Printing statistics for service: MyProxyService and resourceType: WEBSERVICE_OPERATION (Comparable to Service Bus Console | Service Monitoring Details | Operations) =======
Statistic collection time is - Thu Jan 14 11:26:00 CET 2016
Resource name: MyGetDataOperation
	Resource type: WEBSERVICE_OPERATION
		Statistic Name - elapsed-time
		Statistic Type - INTERVAL
			Message Count: 5
			Min Response Time: 0 secs 550 msecs
			Max Response Time: 1 secs 95 msecs
			Overall Avg. Response Time: 0 secs 731 msecs
=========================================

MyProxyServiceStatisticsForPowerpoint.txt

CODE_SERVICE_response-time;0secs820msecs
CODE_Validation_request;0secs0msecs
CODE_Validation_response;0secs0msecs
CODE_Authorization_request;0secs52msecs
CODE_Authorization_response;0secs0msecs
CODE_Initialization_request_elapsed-time;0secs0msecs
CODE_Enrichment_request_elapsed-time;0secs298msecs
CODE_Initialization_response_elapsed-time;0secs2msecs
CODE_RouteToADatabaseProcedure_elapsed-time;0secs146msecs
CODE_Enrichment_response_elapsed-time;0secs230msecs

The post Doing performance measurements of an OSB Proxy Service by programmatically extracting performance metrics via the ServiceDomainMBean and presenting them as an image via a PowerPoint VBA module appeared first on AMIS Oracle and Java Blog.

REST API on Node.js and Express for data retrieved from Oracle Database with node-oracledb Database Driver running on Application Container Cloud


This article is a follow up on my previous article Linking Application Container Cloud to DBaaS – Expose REST API from node.js application leveraging node-oracle-database driver. That article describes how a simple Node.js application is configured for deployment on the Oracle Application Container Cloud and how it leverages the node-oracledb database driver that allows Node.js applications to easily connect to an Oracle Database. From the Application Container Cloud, the application discussed uses a cloud Service Binding to access a DBaaS instance also running on the Oracle Public Cloud. The Node.js application returns a JSON message containing details about departments in the DEPARTMENTS table in the HR schema of the DBaaS instance.

The Node.js application itself is very rudimentary. The way it handles the HTTP requests is quite simplistic. It does not leverage the most common practices in Node.js or JavaScript, it does not handle bind parameters in the queries, nor does it interpret URL path parameters or query parameters. In this article, I move beyond that initial attempt and add a little more sophistication on all these fronts. The resulting application:

  • uses bind parameters in accessing the database
  • handles routing in a more elegant way (using Express)
  • handles query parameters
  • handles URL path segments

Add Express based Routing

To make use of Express in the application, I need to install the Express package, using npm:

npm install express --save

image

The --save option causes a dependency on Express to be added to package.json:

image

The installation by npm downloads modules and adds them to the application file system directories:

image

When I package the application for deployment to the Application Container Cloud, all Express resources need to be included in the application archive.

In the code itself, express is imported by adding a require statement:

var express = require('express');

The main object used for leveraging Express is usually called app:

var app = express();

From here on, the application is reorganized Express style:

var http = require('http');
var express = require('express');
var app = express();

var PORT = process.env.PORT || 8089;

app.listen(PORT, function () {
  console.log('Server running, Express is listening...');
});

app.get('/', function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.write("No Data Requested, so none is returned");
  res.end();
});

app.get('/departments', function (req, res) { handleAllDepartments(req, res); });

app.get('/departments/:departmentId', function (req, res) {
});

function handleAllDepartments(request, response) {
}

 

Recognize and Handle Query Parameters and URL Path Segments

Using Express functionality, it is quite straightforward to extract parameters from the HTTP request. Assuming a request such as /departments/100 or /departments?name=S%, we want to be able to extract the values 100 and S%. There is a distinction between values passed in the URL path and those provided as query parameters.

The first category is extracted using:

var departmentIdentifier = req.params.departmentId; // used to extract 100 from /departments/100, assuming /departments/:departmentId as the URL pattern Express listens to

and the second with:

var departmentName = request.query.name; // used to extract S% from /departments?name=S%, assuming /departments as the URL pattern Express listens to

The application now looks like this:

var http = require('http');
var express = require('express');
var app = express();

var PORT = process.env.PORT || 8089;

app.listen(PORT, function () {
  console.log('Server running, Express is listening...');
});

app.get('/', function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.write("No Data Requested, so none is returned");
  res.end();
});

app.get('/departments', function (req, res) { handleAllDepartments(req, res); });

app.get('/departments/:departmentId', function (req, res) {
  var departmentIdentifier = req.params.departmentId;
});

function handleAllDepartments(request, response) {
  var departmentName = request.query.name || '%';
} //handleAllDepartments

Use Bind Parameters in Database Queries

The parameters we extracted above are to be used in the queries executed against the database, and they should be passed in as bind parameters (for reasons such as SQL injection prevention and reuse of database execution plans). Bind parameters are easy to use with node-oracledb:

var selectStatement = "SELECT department_id, department_name FROM departments where department_name like :department_name";
connection.execute( selectStatement
                  , [departmentName]
                  , { outFormat: oracledb.OBJECT } // Return the result as Object
                  , …

Bind parameters are defined in the query in the familiar way: using identifiers prefixed with a colon.

The second parameter in the call to connection.execute is an array with the values of the bind parameters. In this case – with a single bind parameter defined in the query – there has to be a single value in this array. There are no requirements on the naming of the bind parameter.
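
As a side note: besides the positional array used here, node-oracledb also accepts an object for the bind values, in which case the keys must match the bind parameter names in the statement. A minimal sketch, assuming the same connection and departmentName variables as in the code below:

// bind by name instead of by position; the key matches :department_name in the statement
connection.execute(
  "SELECT department_id, department_name FROM departments WHERE department_name LIKE :department_name",
  { department_name: departmentName },
  { outFormat: oracledb.OBJECT },
  function (err, result) {
    if (err) { console.error(err.message); return; }
    console.log(result.rows);
  }
);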

The entire application is now defined as follows:

var http = require('http');
var oracledb = require('oracledb');
var express = require('express');
var app = express();

var PORT = process.env.PORT || 8089;

app.listen(PORT, function () {
  console.log('Server running, Express is listening...');
});

app.get('/', function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.write("No Data Requested, so none is returned");
  res.end();
});

app.get('/departments', function (req, res) { handleAllDepartments(req, res); });

app.get('/departments/:departmentId', function (req, res) {
  var departmentIdentifier = req.params.departmentId;
  handleDatabaseOperation(req, res, function (request, response, connection) {
    var selectStatement = "SELECT employee_id, first_name, last_name, job_id FROM employees where department_id= :department_id";
    connection.execute(selectStatement
      , [departmentIdentifier]
      , { outFormat: oracledb.OBJECT } // Return the result as Object
      , function (err, result) {
          if (err) {
            console.log('Error in execution of select statement' + err.message);
            response.writeHead(500, {'Content-Type': 'application/json'});
            response.end(JSON.stringify({
              status: 500,
              message: "Error getting the employees for the department " + departmentIdentifier,
              detailed_message: err.message
            }));
          } else {
            console.log('db response is ready ' + result.rows);
            response.writeHead(200, {'Content-Type': 'application/json'});
            response.end(JSON.stringify(result.rows));
          }
          doRelease(connection);
        }
    );
  });
});

function handleDatabaseOperation(request, response, callback) {
  console.log(request.method + ":" + request.url);
  response.setHeader('Access-Control-Allow-Origin', '*');
  response.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE');
  response.setHeader('Access-Control-Allow-Headers', 'X-Requested-With,content-type');
  response.setHeader('Access-Control-Allow-Credentials', true);

  console.log('Handle request: ' + request.url);
  var connectString = process.env.DBAAS_DEFAULT_CONNECT_DESCRIPTOR.replace("PDB1", "demos");
  console.log('ConnectString :' + connectString);
  oracledb.getConnection(
    {
      user          : process.env.DB_USER || "hr",
      password      : process.env.DB_PASSWORD || "hr",
      connectString : connectString
    },
    function (err, connection) {
      if (err) {
        console.log('Error in acquiring connection ...');
        console.log('Error message ' + err.message);

        // Error connecting to DB
        response.writeHead(500, {'Content-Type': 'application/json'});
        response.end(JSON.stringify({
          status: 500,
          message: "Error connecting to DB",
          detailed_message: err.message
        }));
        return;
      }
      // do with the connection whatever was supposed to be done
      console.log('Connection acquired ; go execute ');
      callback(request, response, connection);
    });
}//handleDatabaseOperation

function handleAllDepartments(request, response) {
  handleDatabaseOperation(request, response, function (request, response, connection) {
    var departmentName = request.query.name || '%';

    var selectStatement = "SELECT department_id, department_name FROM departments where department_name like :department_name";
    connection.execute(selectStatement
      , [departmentName]
      , { outFormat: oracledb.OBJECT } // Return the result as Object
      , function (err, result) {
          if (err) {
            console.log('Error in execution of select statement' + err.message);
            response.writeHead(500, {'Content-Type': 'application/json'});
            response.end(JSON.stringify({
              status: 500,
              message: "Error getting the departments",
              detailed_message: err.message
            }));
          } else {
            console.log('db response is ready ' + result.rows);
            response.writeHead(200, {'Content-Type': 'application/json'});
            response.end(JSON.stringify(result.rows));
          }
          doRelease(connection);
        }
    );

  });
} //handleAllDepartments

function doRelease(connection) {
  connection.release(
    function (err) {
      if (err) {
        console.error(err.message);
      }
    });
}

Invoke the REST API

With this implementation in place, the dataApi.js application supports the following calls:

image

to retrieve all departments, and

image

to only retrieve departments for which the name starts with an S and to get all departments with a u in their name:

image

and finally to retrieve all employees in a specific department:

image
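
In terms of URLs these calls boil down to requests along the following lines (hostname and port are placeholders here; depending on the client, the % wildcard may need to be URL-encoded as %25):

  • http://host:8089/departments (all departments)
  • http://host:8089/departments?name=S% (departments whose name starts with an S)
  • http://host:8089/departments?name=%u% (departments with a u in their name)
  • http://host:8089/departments/100 (all employees in the department with id 100)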

 

 

Resources

Home of Express.

Samples for using node-oracledb to interact with an Oracle Database

My previous article Linking Application Container Cloud to DBaaS – Expose REST API from node.js application leveraging node-oracle-database driver – which explains the basics of creating a Node.js application using node-oracledb and of configuring a Node.js application on Oracle Application Container Cloud to link with a DBaaS instance.

Stack Overflow on recognizing URL segments and query parameters http://stackoverflow.com/questions/14417592/node-js-difference-between-req-query-and-req-params

Download the dataApi.zip Node.js application.


The post REST API on Node.js and Express for data retrieved from Oracle Database with node-oracledb Database Driver running on Application Container Cloud appeared first on AMIS Oracle and Java Blog.

Deploying an Oracle JET application to Application Container Cloud and running on Node.js


An article from the Oracle A-Team (http://www.ateam-oracle.com/oracle-jet-with-nodejs/) describes how to run a sample Oracle JET application on a Node.js server. I have followed the steps in that article, ran the JET application locally and then configured the application for deployment to the Oracle Application Container Cloud. Subsequently, I created a new application on that cloud based on the application archive for the JET application and accessed the application. This article briefly describes the steps I went through. It should make clear how to get JET applications to run on the Oracle Application Container Cloud.

image

The steps in short are:

  • install Node.js – which includes npm, the Node Package manager
  • install express using npm
  • install express-generator using npm
  • generate the oraclejetwithnodejs application scaffold using express
  • add dependencies using npm
  • download the Oracle JET Starter Template Quickstart sample application ZIP file and extract the contents into the oraclejetwithnodejs application directory structure
  • create a small JavaScript file (partials.js), make some small modifications in another JavaScript file (app.js) and move a directory
  • run the application locally to verify it is working correctly
  • create file manifest.json that is required by the Application Container Cloud
  • create an application archive (zip file) for the entire oraclejetwithnodejs application
  • created a new application on the Application Container Cloud based on that application archive for the JET application
  • access the application from any browser anywhere in the world

It turns out that the steps that are really specific for the Application Container Cloud are minuscule: the last four bullets in this list. It is extremely simple, in short, to run an Oracle JET application on Node.js and subsequently in a Node.js container in the Application Container Cloud.

Prepare for a new Node.js Environment and a fresh Application

Following the instructions in the article from the Oracle A-Team (http://www.ateam-oracle.com/oracle-jet-with-nodejs/), I have gone through these preparatory steps:

  • install Node.js – which includes npm, the Node Package manager; in my case I downloaded Node.js for Windows from the Home of Node.js. Probably even easier would have been to work with a Docker Container image with a Node.js environment set up.
  • install express using npm (npm install express --save)
  • install express-generator using npm (npm install express-generator -g)
  • generate the oraclejetwithnodejs application scaffold using express (express oraclejetwithnodejs)
  • add dependencies using npm (npm install)

At this point, a directory structure has been created with the basic scaffolding for the new Node.js application, called oraclejetwithnodejs. Nothing has been done that is specific to Oracle JET.

The application – while still bare – can already be run. On Windows, the following command will run Node.js, execute app.js in the application and start listening on local port 3000:

SET DEBUG=oraclejetwithnodejs:* & npm start

(alternatively just node ./bin/www).

Add the Oracle JET Sample application

The Oracle JET Starter Template Quickstart sample application is a client side web application (HTML, JavaScript, CSS and images). It is added to the Node.js application structure mainly as a set of static files that Node.js will serve to the browser requesting these files. The actual JET related work all happens in the browser. It can be a bit confusing to have two kinds of JavaScript files: those that are executed by Node.js and those that are served by Node.js to the browser and that run inside the browser, far away from Node.js.

First, download the Oracle JET Starter Template Quickstart sample application ZIP file from OTN Oracle JET Quick Start application from OTN.

image

Extract the contents into the oraclejetwithnodejs application directory structure, under the public folder in the application scaffold structure.

Here is an overview of the contents of the ZIP file:

image

Move the templates folder under public to the views folder. The resulting directory structure is shown here:

image

As per the instructions in the A-Team article, create a small JavaScript file called partials.js in directory routes.

module.exports = function (basepath) {
  return {
    process: function (req, res) {
      res.sendFile('templates/' + req.params.name, {root: basepath + '/views/'});
    }
  };
}

This little module takes care of handling requests from the browser for templates. These will be served from the views/templates directory.

Next, make some small modifications in another JavaScript file, app.js, in the root of the application.

These two lines – close to the beginning of the file – ensure that the partials module gets loaded into the main application:

var loadPartials = require('./routes/partials.js');
var partials = loadPartials(__dirname);

This line, located after the variable app has been created from module express(), takes care of directing any HTTP request for /templates/some-file-name to the partials module:

// load the templates using the partials
app.get('/templates/:name', partials.process);

These steps are all it takes to merge the Oracle JET application into the Node.js scaffold application.

Stop the Node.js server if it is still running and start the application locally to verify it is working correctly:

SET DEBUG=oraclejetwithnodejs:* & npm start


Prepare the Application for the Application Container Cloud

Not much is needed to prepare this application for deployment on Application Container Cloud (also see this article for an introduction into Node.js application on Application Container Cloud).

We need to create a file called manifest.json that is required by the Application Container Cloud. This file specifies name, version details, a description and most importantly: the operating system command to execute in order to launch the application:

{
  "runtime": {
    "majorVersion": "0.12"
  },
  "command": "node ./bin/www",
  "release": {},
  "notes": "Sample, Quick Start Oracle JET Application prepared to run on Oracle Application Container Cloud"
}

Application Container Cloud also needs the package.json file with Node.js dependencies; this file was created at the time of generating the scaffold application.

Create an application archive (zip file) for the entire oraclejetwithnodejs application.


 

Create new Application in Application Container Cloud

From the dashboard of the Application Container Cloud, click on Create Application and choose Node.js application. Provide a name and some details and elect to upload the application archive.


Press Create.

The application is uploaded. Then the creation (provisioning) will take place.

After a little while, the application container is provisioned and the application is deployed and ready for action.


Click on the hyperlink to access the application’s main entry point:

 


And here we have the Oracle JET sample application, running from the Oracle Application Container Cloud. By doing just a few small things (add manifest.json with proper start command, create overall application archive, create application on Application Container Cloud  based on archive) – we managed to pull this off.

 

Resources

Download the Oracle JET application, configured for deployment on Application Container Cloud: oraclejetwithnodejs.zip (NOTE: the folder public\js\libs in this zip-file is empty; you need to add the libraries here that Oracle JET makes use of – in my case 37 MB worth of JavaScript libraries).

Oracle JET Home Page

Download Oracle JET Quick Start application from OTN

An article from the Oracle A-Team on how to run a sample Oracle JET application on a Node.js server: http://www.ateam-oracle.com/oracle-jet-with-nodejs/


The post Deploying an Oracle JET application to Application Container Cloud and running on Node.js appeared first on AMIS Oracle and Java Blog.

First setup of a connection from Node.js to an Oracle Database


In this article I will demonstrate how to make a connection to a remote Oracle database from Node.js running on Linux 7. We will be using the node-oracledb module to accomplish this. Lucas Jellema gave a great explanation about this module in his recent article Running node-oracledb – the Oracle Database Driver for Node.js – in the Pre Built VM for Database Development.

As described in that article, the node-oracledb module depends on the Oracle 11.2 or 12.1 client libraries. So you need to install a full Oracle client, a local database or the Oracle Instant Client. I will be using the Oracle Instant Client since it is small and easy to install.

Why Linux 7?

As of Node.js 4 the compiler must support C++11.
This is not included in the default compiler on Linux 6. You can either install another compiler or use Linux 7.

Setup

In this article I will be using the following setup.

Oracle Linux 7 VM on VirtualBox
Node.js 4.4.2 (64-bits)
node-oracledb 1.8
Oracle Instantclient 12.1.0.2.0

OS prerequisites

unzip
libaio
gcc-c++

Use yum to install the OS prerequisites.

yum install unzip libaio gcc-c++

On another VM, I have an Oracle Database 12.1.0.2.0 pluggable database running.

Download the components

node-v4.4.2-linux-x64.tar.xz from nodejs.org
instantclient-basic-linux.x64-12.1.0.2.0.zip from Oracle OTN
instantclient-sdk-linux.x64-12.1.0.2.0.zip from Oracle OTN
node-oracledb will be installed via node package manager, npm

Put the files in the /tmp directory of the VM using any sftp tool you like.


We will remove them when we are done.

Installing components

Logon as root (or use sudo)

Install Node.js

cd /opt
tar -Jxf /tmp/node-v4.4.2-linux-x64.tar.xz

Install Oracle instant client

mkdir /opt/oracle
cd /opt/oracle
unzip -q /tmp/instantclient-basic-linux.x64-12.1.0.2.0.zip
unzip -q /tmp/instantclient-sdk-linux.x64-12.1.0.2.0.zip

Rename the directory so we don’t have to tell the installer where to find the OCI libraries, etc…
If you install the Oracle Instant Client in another location, you will have to set two environment variables, OCI_LIB_DIR and OCI_INC_DIR, before installing the oracledb module. See INSTALL.md on GitHub for more details about this.

mv instantclient_12_1 instantclient
cd instantclient
ln -s libclntsh.so.12.1 libclntsh.so

Remove files from /tmp

rm /tmp/instantclient-* /tmp/node-v4.4.2-linux-x64.tar.xz

Install oracledb module

You can choose to install the module locally for the user or globally for the system.
If you install it locally for the user, you don't need to be a privileged user. You can choose any user you need to run Node.js.
For this demonstration I created a user called nodejs and will install the module locally for that user.

Logon as user nodejs

Set environment variables

export PATH=/opt/node-v4.4.2-linux-x64/bin:$PATH
export LD_LIBRARY_PATH=/opt/oracle/instantclient:$LD_LIBRARY_PATH

npm install oracledb

The module is created in /home/nodejs/node_modules

Test the module

On GitHub there are several example scripts available for use with the node-oracledb module. See the node-oracledb examples.

Download dbconfig.js and select1.js for a test of a database connection.
You can either change dbconfig.js to match your database connection or set some environment variables.

export NODE_ORACLEDB_USER=hr
export NODE_ORACLEDB_PASSWORD=hr
export NODE_ORACLEDB_CONNECTIONSTRING=db01.domain.local:1521/fmwdb1.domain.local
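For reference, the dbconfig.js used by the example scripts picks up exactly these variables. A minimal sketch (the file on GitHub may differ in detail; the fallback values here are purely illustrative) looks like this:

module.exports = {
  // connection data for the example scripts; taken from the environment,
  // with illustrative fallbacks
  user: process.env.NODE_ORACLEDB_USER || 'hr',
  password: process.env.NODE_ORACLEDB_PASSWORD || 'hr',
  connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING || 'localhost/orcl'
};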

Run the select1.js

This will perform a simple query on the departments table from the HR sample schema.
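The essence of such a script (not the exact select1.js from the examples, but a minimal sketch in the same style, using the callback API of node-oracledb 1.x) is:

var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');

oracledb.getConnection(
  { user: dbConfig.user,
    password: dbConfig.password,
    connectString: dbConfig.connectString },
  function (err, connection) {
    if (err) { console.error(err.message); return; }
    connection.execute(
      // one row from the HR sample schema
      "SELECT department_id, department_name FROM departments WHERE department_id = :did",
      [180],
      function (err, result) {
        if (err) { console.error(err.message); } else { console.log(result.rows); }
        // give the connection back
        connection.release(function (err) {
          if (err) { console.error(err.message); }
        });
      });
  });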

Some remarks

Set the environment variables permanently.

You can set the environment variables permanently for either the specific user or system wide.
Place them in the .bash_profile of the user that will run Node.js or create a .sh file in /etc/profile.d so the environment variables are set at logon for every user.

Install the oracledb module globally and set an additional environment variable.

npm install -g oracledb

Set the environment variable NODE_PATH so Node.js knows where to find the modules.

export NODE_PATH=/opt/node-v4.4.2-linux-x64/lib/node_modules

Sources and references

nodejs.org
node-oracledb on Github
Oracle Instant Client on OTN
Running node-oracledb – the Oracle Database Driver for Node.js – in the Pre Built VM for Database Development

 


The post First setup of a connection from Node.js to an Oracle Database appeared first on AMIS Oracle and Java Blog.

Create an oracledb enabled Node.js application container


In my previous article, First setup of a connection from Node.js to an Oracle Database, I demonstrated how to make a connection to a remote database using Node.js and the node-oracledb module. I used a dedicated VM with Linux 7 installed and Oracle Instantclient provided the 12.1 client libraries.

Now it’s time to take it a step further.
Let's create an application container and just start it multiple times, running any .js script, and be able to connect to an Oracle database.

I will start by demonstrating how to manually build a Docker image with Node.js and the node-oracledb module. This image can then be used to launch as many application containers as you like (depending on your resources, of course).

For this setup I have installed Docker on an Oracle Linux 7 VM in VirtualBox.

Create the Docker image manually

Create a Linux base image

Logon as root (or use sudo) on the Oracle Linux VM

First we need an operating system for the container.

Pull docker image of oraclelinux from the Docker hub.

docker pull oraclelinux

Now start an interactive Docker container.

docker run -ti oraclelinux /bin/bash

Within the container we will create a non-privileged user and install the required OS packages (including dependencies).
The user can be used to run Node.js scripts in the container without root privileges.

useradd nodejs -p '$6$salt$ZjJzVKp5xtoIl7cfXqZe0mQjWeOpsV2pMiIYpWzkR4ExCBpPdT3mi3eXtG1MSawJnZfXFjBcq0UUmenLq1Cj//'

Note: I used Python to create the encrypted password used when creating the OS user. For your convenience, the command: 

python -c 'import crypt; print crypt.crypt("Welcome01", "$6$salt$")'

Install the required OS packages including dependencies

yum -y install unzip libaio gcc-c++ tar make curl

Create the base image

Exit the container and commit the container to create a base image.

exit
docker ps -a
docker commit 51ce97aa511f

Tag the image to give it a name and version, linux-base:1.0

docker images
docker tag 19de63788941 linux-base:1.0
docker images

Install Oracle Instantclient, Node.js and the node-oracledb module

Now that we have a base image, we are going to run a new container based on this image.
I have downloaded the Oracle Instantclient from the OTN site and put them in the /tmp/Downloads directory.

instantclient-basic-linux.x64-12.1.0.2.0.zip from Oracle OTN
instantclient-sdk-linux.x64-12.1.0.2.0.zip from Oracle OTN

Start an interactive container using the created linux-base image and share the /tmp/Downloads directory using a volume in Docker.

docker run -ti -v /tmp/Downloads:/tmp/Downloads linux-base:1.0 /bin/bash

Install Oracle Instantclient

mkdir /opt/oracle
cd /opt/oracle
unzip -q /tmp/Downloads/instantclient-basic-linux.x64-12.1.0.2.0.zip
unzip -q /tmp/Downloads/instantclient-sdk-linux.x64-12.1.0.2.0.zip
mv instantclient_12_1 instantclient
cd instantclient
ln -s libclntsh.so.12.1 libclntsh.so

Install Node.js

Use curl to download the Node.js software from nodejs.org and the linux pipe (|) function to pass it to the tar utility which unpacks the software in the /opt/ directory.

cd /opt
curl -sSL https://nodejs.org/dist/v4.4.2/node-v4.4.2-linux-x64.tar.xz | tar -xJC /opt/

Install node-oracledb module

The node-oracledb module will be installed as a global module by npm (the Node package manager). Before running npm, set some environment parameters so the node binaries are in the search path and the Oracle libraries can be found.

export PATH=/opt/node-v4.4.2-linux-x64/bin:$PATH
export LD_LIBRARY_PATH=/opt/oracle/instantclient:$LD_LIBRARY_PATH
npm install -g oracledb

Create the jpoot/node_oracledb image.

exit
docker commit a42c4d9b4434 jpoot/node_oracledb:1.0

Testing the created Docker image

Now that I have created the Node.js enabled image, I can test the functionality of it.

I have downloaded example scripts from the node-oracledb examples on GitHub.

Start with running a simple select1.js against an Oracle database. This script connects to the database and selects one row from the Departments table.
I have an Oracle Database 12.1.0.2.0 pluggable database running on a separate VM, with the Oracle example schemas installed in it.
I need to provide some environment variables to provide the PATH, user, password and connect string for the .js scripts to be able to connect to the database.

Create a file called env.list and place the following entries in it.

vi env.list

PATH=/opt/node-v4.4.2-linux-x64/bin:$PATH
LD_LIBRARY_PATH=/opt/oracle/instantclient:$LD_LIBRARY_PATH
NODE_PATH=/opt/node-v4.4.2-linux-x64/lib/node_modules
NODE_ORACLEDB_USER=hr
NODE_ORACLEDB_PASSWORD=hr
NODE_ORACLEDB_CONNECTIONSTRING=192.168.100.45:1521/fmwdb1.domain.local


Explanation of the environment variables

PATH – Add the path to the node and npm binaries to the search path
LD_LIBRARY_PATH – Provides the path to the Oracle libraries
NODE_PATH – Provides the path to the global modules of Node.js
NODE_ORACLEDB_* – Provides the user, password and connect string to the .js scripts. See dbconfig.js for details

I know, I know… Putting a plaintext password in a file is not secure. Keep in mind that this is for demonstration purposes only.
Don’t do this in any non-demo environment!!!

Run the container with the necessary parameters.

docker run --rm -u nodejs -w /home/nodejs/examples --env-file ./env.list --add-host=db01.domain.local:192.168.100.45 -v /tmp:/home/nodejs/examples jpoot/node_oracledb:1.0 node select1.js

Let’s walk through the parameters of the docker run command I used.

docker run
Main docker command to run the container

--rm
Remove the container after it has completed

-u nodejs
User to run within the container

-w /home/nodejs/examples
Working directory, the container starts in this directory

--env-file ./env.list
File that contains the environment variables to be provided to the container

--add-host=db01.domain.local:192.168.100.45
Add a host entry for the database server to the /etc/hosts file. This is used because I don’t have a DNS server running.

-v /tmp:/home/nodejs/examples
Add a volume to the container, mapping a local directory to a directory in the running container.
In this case this mounts a directory with the .js scripts I want to run, so you don't have to add the scripts to the image and the image does not depend on script changes.

jpoot/node_oracledb:1.0
Docker image used as base for the container. In this case it is the image I created earlier.

node select1.js
Command to run in the container. Run node command with the select1.js script.

YES!!! The scripts ran correctly in the container.

And now the easy way

In the previous chapters I have created a Docker image that can run Node.js with the capability to connect to an Oracle database.
As you have seen, this involves a lot of manual labor. Manual labor means a greater chance of mistakes.

So let’s automate the way to create this image.

In the next steps I will create and use a Dockerfile to automate the creation of the image.

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession.

See Best practices for writing Dockerfiles and the Dockerfile reference for more details on the Dockerfile.

Dockerfile

Create an empty directory to place the Dockerfile in.
I will use the same Instantclient zip files as previously downloaded.

instantclient-basic-linux.x64-12.1.0.2.0.zip
instantclient-sdk-linux.x64-12.1.0.2.0.zip

Put these files in the same directory as the Dockerfile for our convenience. (Or create a symlink to the files)

mkdir docker-file
cd docker-file
vi Dockerfile

# Pull Oracle Linux 7 image from Docker hub
FROM oraclelinux

# Install OS packages
RUN yum -y install unzip libaio gcc-c++ tar make curl \
&& useradd nodejs -p '$6$salt$ZjJzVKp5xtoIl7cfXqZe0mQjWeOpsV2pMiIYpWzkR4ExCBpPdT3mi3eXtG1MSawJnZfXFjBcq0UUmenLq1Cj//'

# Add Node.js
RUN curl -sSL https://nodejs.org/dist/v4.4.2/node-v4.4.2-linux-x64.tar.xz \
| tar -xJC /opt/
ENV PATH /opt/node-v4.4.2-linux-x64/bin:$PATH

# Add Oracle Instantclient
ADD instantclient-basic-linux.x64-12.1.0.2.0.zip /tmp/
ADD instantclient-sdk-linux.x64-12.1.0.2.0.zip /tmp/

RUN unzip -q /tmp/instantclient-basic-linux.x64-12.1.0.2.0.zip -d /opt/oracle/ \
&& unzip -q /tmp/instantclient-sdk-linux.x64-12.1.0.2.0.zip -d /opt/oracle/ \
&& mv /opt/oracle/instantclient_12_1 /opt/oracle/instantclient \
&& ln -s /opt/oracle/instantclient/libclntsh.so.12.1 /opt/oracle/instantclient/libclntsh.so \
&& rm /tmp/instantclient-*

ENV LD_LIBRARY_PATH /opt/oracle/instantclient

# Install the node-oracledb module as global module to Node.js using npm
RUN npm install -g oracledb

ENV NODE_PATH /opt/node-v4.4.2-linux-x64/lib/node_modules

Build the Docker image

From the docker-file directory run the docker build command and tag the created image with a name and version.

docker build -t="jpoot/node_oracledb:1.1" .

Note: mind the . at the end of the command. It tells Docker to build the image using the Dockerfile in the current directory.

Skipping the yum install lines ….

Within two minutes I have a fully functional Docker image ready.

docker images

Now let’s see if the image works.

Remove the PATH, LD_LIBRARY_PATH and NODE_PATH from the env.list.
These environment variables were already provided in the Dockerfile.

docker run --rm -u nodejs -w /home/nodejs/examples --env-file ./env.list --add-host=db01.domain.local:192.168.100.45 -v /tmp:/home/nodejs/examples jpoot/node_oracledb:1.1 node select1.js

Yeah, it works as expected!!!

To test the functionality I used Node.js to run a script that executes SQL. The same Docker image can be used to run other .js scripts.
As another example, run a simple webserver.

docker run -d -u nodejs -w /home/nodejs/examples -p 80:3000/tcp -v /tmp:/home/nodejs/examples jpoot/node_oracledb:1.1 node http.js
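The http.js script is simply whatever file of that name you place in the mounted /tmp directory. A minimal version (assumed here for illustration; it listens on port 3000, which the -p flag maps to port 80 on the host) could be:

var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node.js inside a Docker container\n');
}).listen(3000);

console.log('Webserver listening on port 3000');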

Btw. Don’t forget to kill this running container when you’re done.

docker kill <container id>

Remarks

In this article I have shown you two ways to create a Docker image with Node.js and a functional node-oracledb module.
There is no right or wrong way to create the image. It is however much easier to use the Dockerfile method. It is fast, easy and prevents human errors. Also, if you want to add modules or functionality to the image, just add the commands to the Dockerfile and create a new image in a couple of minutes.

Sources and references

nodejs.org
node-oracledb on Github
Oracle Instant Client on OTN
First setup of a connection from Node.js to an Oracle Database
Running node-oracledb – the Oracle Database Driver for Node.js – in the Pre Built VM for Database Development


The post Create an oracledb enabled Node.js application container appeared first on AMIS Oracle and Java Blog.

My first NodeJS service


Microservices implemented in JavaScript running on NodeJS are becoming quite popular lately. In order to gain some experience with this, I created a little in memory NodeJS cache service. Of course statefulness complicates scalability, but if I would also have implemented a persistent store to avoid this, the scope of this blog article would have become too large. Please mind that my experience with NodeJS is limited to a NodeJS workshop from Lucas Jellema and a day of playing with NodeJS. This indicates it is quite easy to get started. In this blog I’ll highlight some of the challenges I encountered and how I solved them. Also I’m shortly describing what Oracle is doing with NodeJS. Because the JavaScript world changes rapidly, you should also take into account the period between when this blog is written and when you are reading it; it will most likely quickly become outdated. You can download the code from GitHub here.
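To give an impression of the scope of such a service, here is a minimal sketch of an in-memory cache exposed over HTTP with Express (illustrative only, not the code from the GitHub repository; keys are path segments and values are passed as a query parameter):

var express = require('express');
var app = express();
var cache = {};  // state lives inside this single Node.js process

// store a value: PUT /cache/greeting?value=hello
app.put('/cache/:key', function (req, res) {
  cache[req.params.key] = req.query.value;
  res.status(201).json({ key: req.params.key, value: req.query.value });
});

// read a value: GET /cache/greeting
app.get('/cache/:key', function (req, res) {
  if (req.params.key in cache) {
    res.json({ key: req.params.key, value: cache[req.params.key] });
  } else {
    res.status(404).json({ error: 'no such key' });
  }
});

app.listen(3000, function () {
  console.log('cache service listening on port 3000');
});

Because the cache object lives inside a single Node.js process, every additional instance gets its own, empty cache, which is exactly the statefulness and scalability issue mentioned above.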

Choosing an IDE

In the Java world there are several popular IDEs such as JDeveloper, Eclipse, Netbeans and IntelliJ. For JavaScript, the IDEs I've heard most about from JavaScript developers (as a newbie it helps to talk to people with experience) are Microsoft Visual Studio Code and JetBrains' WebStorm. Netbeans also has JavaScript support and is the IDE of choice for Oracle JET development. I have not looked into Netbeans yet. I decided on Microsoft Visual Studio Code since WebStorm requires a paid license.


NodeJS package manager

The NodeJS package manager is npm. npm can install modules globally and locally. Supporting tools like 'mocha' for testing and 'typings' for TypeScript support are good candidates to install globally. Do keep track though of your globally installed modules, since if you want to reproduce your environment somewhere else, these modules could be dependencies (especially in your build process). You can configure local dependencies in a package.json file. When you do a 'npm install', modules mentioned in that file are installed locally in the node_modules folder of your project. If you also want to update the package.json, you can do 'npm install <module> --save'. This allows you to easily update versions of modules. When your node_modules directory is corrupt because you for example interrupted a module download, you can just remove the node_modules directory and rebuild it from the package.json file.


Code completion

As a spoiled modern developer, I need code completion! This especially helps a lot when you are unfamiliar with a language and want to explore what you can do with a specific object or how to use standard libraries/modules. JavaScript is not strongly typed. You need type information to provide code completion. Microsoft has provided the open source TypeScript to help with that.


TypeScript allows you to write .ts files which can be compiled to JavaScript. These .ts files can also be used to allow Visual Studio Code to provide code completion (even when writing JavaScript and not TypeScript). There is a large library of TypeScript definitions available for JavaScript modules. There are two tools I currently know of to easily import .ts files into your project. TSD and Typings. TSD is end of life (https://github.com/DefinitelyTyped/tsd/issues/269). In order to download .ts files for your modules, you can create a typings.json file which indicates the dependencies on TypeScript files. When you do a 'typings install', it will download these dependencies into your project in the 'typings' directory. When you want to add new dependencies, you can do a command like 'typings install node --save --ambient' to save the dependency to the typings.json file. TSD uses a tsd.json file which can be converted to a typings.json file with 'typings init --upgrade'. You can read more about how to set this up here.

Next to the tsd files, you need to provide Visual Studio Code with compiler options in a jsconfig.json file.


The result is working code completion in the editor.

Testing

I used Mocha to perform tests. Why Mocha? I heard about this from colleagues and found a lot of references to ‘mocha’ when reading about JavaScript testing.


There are a lot of other modules which help with testing. Mocha is installed globally. When executed in the root of your project, it looks for files in the test directory and executes them.

Below some sample test code

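A Mocha test for such a cache service, sketched with a hypothetical cache module that exports put and get (so not the author's original test code), could look like this:

var assert = require('assert');
var cache = require('../cache');   // hypothetical module under test

describe('cache', function () {
  it('returns a value that was stored before', function () {
    cache.put('greeting', 'hello');
    assert.equal(cache.get('greeting'), 'hello');
  });

  it('returns undefined for an unknown key', function () {
    assert.equal(cache.get('does-not-exist'), undefined);
  });
});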

When you run Mocha, it reports for each test whether it passed or failed.

For testing as a true outside client, you can use Ready-API or, for example, Postman, which is a Chrome plugin.


Debugging

In order to debug your project and run NodeJS from Visual Studio Code, you need to provide a launch.json file in the .vscode directory.


This file indicates which JavaScript file to run and where to attach the debugger. You can read about this in more detail here.

Some general observations

Single threaded

A NodeJS instance uses a single thread. Therefore, you should strive to have all your code non-blocking. Most functions support asynchronous interaction. This allows you to perform an action and wait for an asynchronous callback. While 'waiting' for the callback, other actions can be performed and other callbacks can be received. This event driven / asynchronous way of programming is something you have to get used to. Scalability can be increased by raising more NodeJS instances. In accordance with Microservices architecture, services should be stateless. My example cache process is not stateless and will not scale well. Instead of using a local variable I should have used a persistent store.
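A trivial illustration of this non-blocking style, using Node's built-in fs module:

var fs = require('fs');

// the read happens in the background; the callback fires once the data is available
fs.readFile('/etc/hosts', 'utf8', function (err, data) {
  if (err) { return console.error(err.message); }
  console.log('file read completed:', data.length, 'characters');
});

// this line prints first, while the read above is still in progress
console.log('doing other work...');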

Few generally accepted standards

This is of course my first impression. There are few generally accepted standards in the NodeJS world. For example, for testing there are many frameworks available. This also means that direct IDE support for a testing framework is difficult or requires manual scripting. Also, talking to databases has no standard module (not something like JDBC in the Java world). Several modules use similar ideas and principles however. For example, most of them use JSON configuration files as part of the project. Also, since the JavaScript language has its limitations (which become more apparent in large projects), there are several languages, such as TypeScript, which offer useful functionality and can be compiled to JavaScript.

NodeJS and Oracle

Oracle is of course also investing in NodeJS. There is the Application Container Cloud Service which allows you to run NodeJS applications. You can choose the version of NodeJS on which your application should run. This allows Oracle to (relatively easily) stay up to date with the cloud service and users to choose when to run on which version. The cloud service comes integrated with an Oracle database driver to easily access your Oracle persistent store.


Also, rumor has it that NodeJS is used in Mobile Cloud Service under the hood.

Oracle is working on several products which go well with NodeJS such as the API Platform and most likely more is coming in this area.

If you want to know how you can access the Oracle database from NodeJS, look here. If you want to create a Docker container with NodeJS and the Oracle database driver integrated, see here.

NodeJS for orchestration

In Oracle SOA Suite and Service Bus we were used to graphical development to do integrations: languages like BPEL, which are XML based, to describe flows, and out of the box functionality like security policies, management interfaces and technology adapters/transports. BPEL or ServiceBus 12.2.1 can run JavaScript in the JVM and support untyped JSON. NodeJS provides a lot of modules but very little out of the box and not much standardization. It is fast, light and scalable though (when stateless). I can imagine that NodeJS would be especially suitable to provide services which are thin and do not require much functionality (≈microservices). An example could be a layer of data services on top of a persistent store. When however you want to validate your messages, provide traceability, advanced error handling, throttling and require features like I previously mentioned, you can do it in NodeJS but it would require quite some coding (which requires maintenance). You might be better off with something which provides these features out of the box (like SOA Suite) in such a case. If however you want a quick stateless solution to easily integrate two REST/JSON services server-side with JavaScript, NodeJS is probably currently the best solution for that.
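As an impression of such a thin, stateless server-side integration, here is a sketch (the backend URL and the reshaping are made up for illustration) using just Node's built-in http module:

var http = require('http');

http.createServer(function (req, res) {
  // call a (hypothetical) backend REST service that returns a JSON array of orders
  http.get('http://backend.example.com/api/orders', function (backendRes) {
    var body = '';
    backendRes.on('data', function (chunk) { body += chunk; });
    backendRes.on('end', function () {
      var orders = JSON.parse(body);
      // pass on only a summary of each order
      var summary = orders.map(function (o) { return { id: o.id, status: o.status }; });
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(summary));
    });
  }).on('error', function (err) {
    res.writeHead(502);
    res.end(err.message);
  });
}).listen(3000);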

Software lifecycle

Because of the speed of JavaScript evolution, the turnover rate of code written using specific frameworks/modules will probably be higher since they are quickly outdated. I noticed however that it is quite easy to get started with new modules; not really a steep learning curve. I can imagine in many cases it will be more easy to start over instead of trying to migrate. Large companies and governments might have to get used to such a way of working.

Some things I have not looked at

Code compliancy

I have not looked at code compliancy inspection yet. Especially for a ‘sloppy’ language like JavaScript, such tooling is important to make larger projects manageable (read here). I heard JSLint for JavaScript and TSLint for TypeScript are popular.


Test coverage

For test coverage reporting you also need specific tools. See here for some example modules. Istanbul seems popular when using Mocha.

Building

Grunt and/or Gulp can help you build your project. You require tools like these to help you compile, for example, your TypeScript to JavaScript and to perform several other build steps, such as automating the release process. These tools can also orchestrate the tasks mentioned above, such as code compliancy checking and test coverage checking.


Finally

I was surprised how easy it was to get started with NodeJS. The world of JavaScript was new to me and you can see from this blog post what I have learned in just a single day and a workshop. You need very little to get started.


The post My first NodeJS service appeared first on AMIS Oracle and Java Blog.

Blog Continuous Delivery and the Oracle database (II)


In the previous Blog I have described how to implement CD for an Oracle database by using migration scripts.

In this Blog I will describe how to create migration scripts (automagically).

DML scripts
This is the simpler case because you need them less often. There are various data compare tools which create scripts to migrate the differences between a source and target schema:
dbForge Data Compare for Oracle, v3.7 Express (free)
redgate Data Compare for Oracle (not free)

And you can do it by hand using insert, update, delete or merge statements.

So I will ignore DML scripting for the rest of this Blog.

DDL scripts
‘Off the shelf’ tools
So what is left now, is how to create DDL migration scripts. There are various tools available on the sites just mentioned above. And SQL Developer has its Database Diff tool:


I tried all these tools and I found that these tools:
• are slow;
• have a bad user interface;
• have no (good) command line interface;
• are UI based so another UI to learn;
• cannot logon with OPS$ accounts;
• do not always generate the code correctly which is the main reason for their existence (!).

Do it yourself…
So what to do? Good old DIY. Some investigation of the SQL Developer Database Diff tool showed me that it uses the Oracle packages DBMS_METADATA and – new since Oracle 11g – DBMS_METADATA_DIFF (see SQL Developer Database Diff – Compare Objects From Multiple Schemas). An important remark is that you need an extra license for Oracle 11g DBMS_METADATA_DIFF: the Oracle Enterprise Manager Change Management option is necessary. For Oracle 12c the licensing has changed but you still need something extra. Check it! Luckily my company had all licensing necessary.

Based on these Oracle packages I thought it would be feasible to create a command line tool without all the inconveniences. In the end I succeeded but it took some time to solve the technical problems… I will describe them later, after the requirements.

Design requirements
The design requirements:
• A command line interface is necessary (for Continuous Delivery)
• You have to be able to specify a source and target schema
• Filtering on object type (TABLE, PROCEDURE, VIEW, etc.) must be possible
• Filtering on object names (inclusive or exclusive) too
• Source and target schemas may have different names
• Source and target schemas may reside in different databases
• You must use database links to logon to a remote source or target schema
• The account which runs the tool does not have to be the source or target schema
• The privileges of the account which runs the tool must be sufficient to lookup any schema object

DBMS_METADATA_DIFF and DBMS_METADATA
According to the Oracle documentation, the following DBMS_METADATA_DIFF object types can be compared: CLUSTER, CONTEXT, DB_LINK, FGA_POLICY, INDEX, MATERIALIZED_VIEW, MATERIALIZED_VIEW_LOG, QUEUE, QUEUE_TABLE, RLS_CONTEXT, RLS_GROUP, RLS_POLICY, ROLE, SEQUENCE, SYNONYM, TABLE, TABLESPACE, TRIGGER, TYPE, TYPE_SPEC, TYPE_BODY, USER and VIEW. Please note that these DBMS_METADATA_DIFF object types may differ from the object types returned by SELECT DISTINCT OBJECT_TYPE FROM ALL_OBJECTS.

A PACKAGE for instance cannot be compared using DBMS_METADATA_DIFF, but there is a simple work-around: use DBMS_METADATA to generate the code for both schemas and compare them.

DBMS_METADATA_DIFF has some handy COMPARE functions which return a CLOB containing the differences (one or more DDL statements):
• COMPARE_SXML
• COMPARE_ALTER
• COMPARE_ALTER_XML

The biggest difficulty with these functions is to extract the separate DDL statements for one object if the CLOB contains several DDL statements. For example when a TABLE CLOB contains various ALTER statements or when a TRIGGER CLOB contains the trigger definition and an extra enable trigger statement (as DBMS_METADATA_DIFF creates for you).

That is why I did not use those DBMS_METADATA_DIFF COMPARE functions but some more basic subprograms:

  • OPENC – Input: the type of objects to be compared. Output: a handle.
  • ADD_DOCUMENT – Specifies an SXML document to be compared. SXML documents are created by DBMS_METADATA.
  • FETCH_CLOB – Returns a CLOB showing the differences between the two documents specified by ADD_DOCUMENT.
  • CLOSE – Invalidates the handle returned by OPENC and cleans up associated state.

You see in this table that SXML documents are created. These are the DBMS_METADATA subprograms actually used:
• OPEN
• ADD_TRANSFORM
• SET_FILTER
• SET_REMAP_PARAM
• SET_TRANSFORM_PARAM
• SET_PARSE_ITEM
• SET_COUNT
• GET_QUERY
• FETCH_DDL
• CLOSE

Design
A package which is a layer on top of the DBMS_METADATA packages is the main program.

I used one of my favourite PL/SQL constructions, a pipelined function, to supply arguments like object type, object name, source schema, etcetera to queries showing differences.

Here an excerpt of the package specification:

subtype t_metadata_object_type is varchar2(30); /* longer than all_objects.object_type%type */

subtype t_object_name is varchar2(4000); /* normally 30 is enough but some synonym names can be very long (see SYS.KU$_SYNONYM_VIEW), just like XML schema names */

-- Some objects have lines longer than 4000 characters like
-- COMP_STK.STK_V_SPT_SOORT_BEDRAGEN.  Since querying a CLOB through a database
-- link poses problems, you have to split the lines in pieces of 4000 characters
-- (the limit for a SQL varchar2 type).  In total you get
-- 32767 (the maximal length of dbms_sql.varchar2a) which should be enough.
type t_ddl_line_rec is record(
   object_schema all_objects.owner%type
  ,object_type   t_metadata_object_type
  ,object_name   t_object_name
  ,grantee       all_users.username%type default null
  ,ddl#          integer /* DDL statement number (indexes and comments for
                            a table may generate several statements) */
  ,line#         integer /* DDL line number starting from 1 within
                            (object_schema,object_type,object_name,grantee,ddl#) */
  ,text1         varchar2(4000) default null
  ,text2         varchar2(4000) default null
  ,text3         varchar2(4000) default null
  ,text4         varchar2(4000) default null
  ,text5         varchar2(4000) default null
  ,text6         varchar2(4000) default null
  ,text7         varchar2(4000) default null
  ,text8         varchar2(4000) default null
  ,text9         varchar2(767) default null);

type t_ddl_line_tab is table of t_ddl_line_rec;

function display_ddl_schema_diff
(
  p_object_type in varchar2 default null
 ,p_object_names in varchar2 default null
 ,p_object_names_include in natural default null
 ,p_schema_source in varchar2 default user
 ,p_schema_target in varchar2 default user
 ,p_network_link_source in varchar2 default null
 ,p_network_link_target in varchar2 default null
) return t_ddl_line_tab pipelined;

Usage
The query with the pipelined function is executed in SQL*Plus.

Here an excerpt of the SQL*Plus script:

var c refcursor

begin
  open :c for
    select  t.*
    from    table
            ( comp_stm.stm_ddl_util.display_ddl_schema_diff
              ( p_schema_source => '&1'
              , p_schema_target => '&3'
              , p_network_link_source => '&2'
              , p_network_link_target => '&4'
              , p_object_type => '&5'
              , p_object_names => '&6'
              , p_object_names_include => to_number('&7')
              )
            ) t
    ;
end;
/

print c

Technical issues
During development I encountered several technical issues. I will show them to you…

Migration order
When the migration scripts are run, the execution of the various DDL statements must be without errors. For instance, you have to create a table first before you can create an index on it. The DBMS_METADATA_DIFF package is object based, not set based, so you do have to figure out the order yourself. This was solved using the Oracle dictionary.

AUTHID CURRENT_USER
The base package is created like:

create or replace package comp_stm.stm_ddl_util authid current_user is

This fulfils the requirement mentioned earlier: the privileges of the account which runs the tool must be sufficient to lookup any schema object. You only need to grant the SELECT_CATALOG_ROLE role to the account which runs the tool.

CLOBs and remote databases
A query using a database link cannot include CLOBs, so

select <clob_column> from object@db

does not work.

To get around this, I used 8 fields of 4000 characters each and 1 field of 767 characters. This sums up to 32767 which should be sufficient for one line of code.

Pipelined functions and remote databases
The following query syntax is not accepted:

select * from table(comp_stm.display_ddl_schema_diff(...)@db)

The work-around for this was to supply the arguments first to a helper procedure set_display_ddl_schema_diff() and then to use a different pipelined helper function get_display_ddl_schema_diff() in a view:

create or replace view comp_stm.stm_v_display_ddl_schema as
select  t.object_schema
,       t.object_type
,       t.object_name
,       t.grantee
,       t.ddl#
,       t.line#
,       t.text1
,       t.text2
,       t.text3
,       t.text4
,       t.text5
,       t.text6
,       t.text7
,       t.text8
,       t.text9
from    table(comp_stm.stm_ddl_util.get_display_ddl_schema) t

Now you can rewrite the query like this (after supplying the arguments first):

select * from comp_stm.stm_v_display_ddl_schema@db

Global database links
The URL http://dba.stackexchange.com/questions/93938/why-am-i-able-to-query-a-remote-database-without-a-dblink describes very well how this works.

In short it tells us that when a database link is not known as a private or public database link but it is known via LDAP, that in that case a connection to the remote database is made using the same account and password. So when you use for instance OPS$ accounts (which have no password), you are done: no need to create database links on all databases used for creating the migration scripts.

DBMS_METADATA_DIFF and SYNONYM
DBMS_METADATA_DIFF generates a DDL statement for a SYNONYM even though it has not changed. I solved this by creating the DDL statements via DBMS_METADATA and compare them later on.

DBMS_METADATA_DIFF and TYPE_BODY
This one gave a run-time error. Also solved by using DBMS_METADATA and compare them later on.

DBMS_METADATA_DIFF and VIEW
DBMS_METADATA_DIFF does not generate a CREATE OR REPLACE VIEW as you would expect when a view has changed. Maybe it will work in Oracle 12c. I solved this by creating the DDL statements via DBMS_METADATA and compare them later on.

COMMENTs
A COMMENT is for DBMS_METADATA.SET_FILTER not a base object but a dependent object of a TABLE or VIEW. In order to determine all COMMENTs of a schema, you first have to know all TABLEs and VIEWs of the schema and then to determine the COMMENTs of those TABLEs and VIEWs.

Public synonyms
Only public synonyms who point to schema objects which are already part of the comparison object set must be compared too. For DBMS_METADATA.SET_FILTER you have to specify as schema PUBLIC and the base schema must be the object schema.

DBMS_METADATA_DIFF sometimes returns comments
An example for a SEQUENCE:

-- ORA-39305: Cannot alter attribute of sequence: START_WITH

Those lines are ignored. In Oracle 12c this has changed because then the sequence start_with clause can be modified.

Creation of UTF8 files
UTF8 is the standard encoding of the tool Flyway and UTF8 is very portable. That is why I have decided to use it as the standard for creating migration scripts.

In order to transform the database character set to a UTF8 client character set, you must set the Oracle client environment variable NLS_LANG to <language>_<territory>.UTF8. Now the Oracle client (SQL*Plus in this case) will retrieve the data in UTF8. After the retrieval, the files have to be created in UTF8 encoding too. Using the script language Perl this was easy. Perl allows us to set the environment variable NLS_LANG, call SQL*Plus and create UTF8 files.

Whitespace differences when promoting
Sometimes database object code (packages for instance) without tabs was converted into code with tabs so when the migration script was run there was still a difference. This has to do with the SET TAB ON setting in SQL*Plus. Turning it to OFF solved this problem. In general it is a good idea not to use tabs in database code anyway.
Another issue was that trailing whitespace was not preserved during promotion. The problem is that in SQL*Plus you can print a line (a query column) of size N either completely or trimmed (SET TRIMSPOOL ON), but neither way gives you the exact line. The solution for this was to calculate the line size in the SQL*Plus script and supply that as well to the Perl script. Now the Perl script could create a line of exactly that length.

In the next Blog I will show you how to make deployments more robust which covers things like 24×7 deployments and rollback of deployments.


The post Blog Continuous Delivery and the Oracle database (II) appeared first on AMIS Oracle and Java Blog.


Parse JSON Array in SQL and PL/SQL – turn to a Nested Table


Transferring data between technologies and application tiers is done using various formats – binary and native on the one hand and open, text based formats such as CSV, XML and JSON on the other. Use of JSON is rapidly growing as an increasing number of platforms and technologies provides support for it.

I recently was working on a Node.js application that exposed a REST API to HTTP consumers. The consumers could send POST requests with a body that could hold various complex request parameters. The Node.js application used the Oracle DB Driver for Node to connect to the database and invoke PL/SQL units to retrieve data from which the HTTP Response would be constructed. One of the input parameters to the PL/SQL procedure was a string that could contain a JSON array. This allowed transfer of potentially many parameter values.

A JSON array is a string constructed like this:

 ["mit", "nach", "nebst", "bei"] 

To PL/SQL, this is just a string with a single value. My challenge was to turn this single value into the multiple values that were intended.
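On the Node.js side this typically means serializing the JavaScript array to a JSON string and passing it as a single VARCHAR2 bind parameter. Here is a sketch with node-oracledb; the procedure and bind names are invented for illustration:

var oracledb = require('oracledb');

// 'connection' is an open node-oracledb connection; movieevent_api.query_events
// is a hypothetical PL/SQL procedure with a VARCHAR2 parameter holding a JSON array
function queryEvents(connection, tags, callback) {
  connection.execute(
    "BEGIN movieevent_api.query_events(p_tags => :tags); END;",
    { tags: JSON.stringify(tags) },   // e.g. '["mit","nach","nebst","bei"]'
    callback
  );
}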

Oracle Database 12c – 12.1.0.2 – introduced support for JSON. Part of this support is the operator JSON_TABLE that can be used in a SQL query to turn [parts of] a JSON document into relational data. The query that does the trick for a simple JSON array with scalar values looks like this:

SELECT value
FROM json_table('["content", "duration"]', '$[*]'
COLUMNS (value PATH '$'
)
)

Or more general:

with json as
( select '["mit", "nach", "nebst", "bei"]' doc
from dual
)
SELECT value
FROM json_table( (select doc from json) , '$[*]'
COLUMNS (value PATH '$'
)
)

I used this query in a simple PL/SQL function, that is invoked with a VARCHAR2 holding a JSON array and returns a table of VARCHAR2 with the individual values in the JSON array:

create or replace
FUNCTION json_array_to_string_tbl (
p_json_array IN VARCHAR2
) RETURN string_tbl_t
is
l_string_tbl string_tbl_t:= string_tbl_t();
begin
if p_json_array is not null and length(p_json_array)>0
then
SELECT value
bulk collect into l_string_tbl
FROM json_table( p_json_array, '$[*]'
COLUMNS (value PATH '$'
)
);
end if;
return l_string_tbl;
end json_array_to_string_tbl;

The definition of the STRING_TBL_T is quite simply:

create or replace
type string_tbl_t as table of varchar2(2000);

This function can used for example like this:

select column_value
from table(json_array_to_string_tbl('["mit", "nach", "nebst", "bei"]'))

In cases where the JSON array does not hold scalar values but instead JSON objects, such as:

[{"firstName": "Tobias", "lastName":"Jellema"},{"firstName": "Anna", "lastName":"Vink"} ]

A similar approach can be used.

And the SQL query could read:

with json as
( select '[{"firstName": "Tobias", "lastName":"Jellema"},{"firstName": "Anna", "lastName":"Vink"} ]' doc
  from   dual
)
SELECT first_name
,      last_name
FROM  json_table( (select doc from json) , '$[*]'
                COLUMNS ( first_name PATH '$.firstName'
                        , last_name PATH '$.lastName'
                        )
               )

In this case, an ADT – Abstract Data Type – (aka UDT) could be defined to bring some structure from JSON to SQL and PL/SQL as well:

create type person as object
( first_name varchar2(50)
, last_name  varchar2(50)
);

And the SQL query could read:

with json as
( select '[{"firstName": "Tobias", "lastName":"Jellema"},{"firstName": "Anna", "lastName":"Vink"} ]' doc
  from   dual
)
SELECT person( first_name , last_name) person
FROM  json_table( (select doc from json) , '$[*]'
                COLUMNS ( first_name PATH '$.firstName'
                        , last_name PATH '$.lastName'
                        )
               )

This shows how easy it is to transfer multi-value JSON based data structures to PL/SQL as simple string values and have them interpreted in PL/SQL.

Try out the statements shown in this article on Oracle LiveSQL.


The post Parse JSON Array in SQL and PL/SQL – turn to a Nested Table appeared first on AMIS Oracle and Java Blog.

Continuous Delivery and the Oracle database (III)


In this series of blogs about Continuous Delivery and the Oracle database, I describe how to automate deployments (installations). In the previous two Blogs I have described the tools and techniques used to create and install migration scripts.

In this Blog I will describe the ‘(un)happy flow’ for a database deployment: what to do if the deployment (installation) is correct (the ‘happy flow’) and what to do if it goes wrong (the ‘unhappy flow’)?

Introduction
One might ask what could be the problem if the deployment succeeds. But under normal circumstances you are not the only one on the database. Other sessions must be able to continue during a deployment. For instance it may be impossible to deploy packages that are used by another session. The installation process will have to wait on the running session and that may cause a time-out and thus a failed installation, even though the previous deployment in test went well. And even if you’re lucky and you can compile a package without problems, other sessions may encounter this very well known error:

ORA-04068: existing state of packages has been discarded

A solution is of course not to use ‘package state’ but I do not think management is very happy with such a statement because it will take too much development and test effort to remove all package state. Later in this Blog I will show you how to solve this problem without recoding.

And what if the deployment just fails? Restoring a backup is very drastic and not always a solution: other sessions may have changed data and/or code that should not be restored at all. And Oracle Flashback technology is not sufficient because it can only restore tables (data). It cannot restore code.

What we actually need is to go back to a previous version of the code. The data should generally stay as it is. Since Oracle 11g release 2 there is a nice solution: Edition Based Redefinition (EBR). Tom Kyte called it already a killer app.

Edition Based Redefinition
I quote Lucas Jellema from his Blog Quick introduction of what and why of Oracle Database Edition Based Redefinition:

“For the 11g release of the database Oracle went beyond the reduction of downtime and the ability to redefine database objects with a minimum of unavailability. It implemented in the database a mechanism that is very similar to WebLogic’s production redeployment: the ability to have two versions of the application (or in this case: a coherent set of database objects) live at the same time, the old version servicing longer running sessions that started before the new version went live. This mechanism is called Edition Based Redefinition. It is illustrated in the next figure: the two versions 1 and 2 of Application X – temporarily running in parallel on WebLogic just after production redeployment – have need different versions of the database objects. Version 1 relies on the base release of the database objects while version 2 of the application uses the versions of database objects shipped in Release 2, complemented with the objects from version 1 that were not changed for version 2.

[figure: EBR]

The notion of a ‘release of database objects’ was introduced in Oracle Database 11gR2 and is called ‘an edition’. Editions are almost parallel universes in which the same database objects – such as views, packages, types – from the same schema can exist in with different definitions. For example: package SALARY_MANAGEMENT in Schema SCOTT can exist multiple times – for example once in the Base Edition, once in Edition Release 2 and once in Edition Release 4.”

Tom Kyte has written several articles about EBR that I can recommend very much:

  1. A Closer Look at the New Edition, Oracle magazine, January 2010
  2. Edition-Based Redefinition, Part 2, Oracle magazine, March 2010
  3. Looking at Edition-Based Redefinition, Part 3, Oracle magazine, May 2010

And this is the Oracle White Paper about EBR:
http://www.oracle.com/technetwork/articles/grid/edition-based-redefinition-1-133045.pdf

This Oracle White Paper shows that ‘package state’ is linked to a session in combination with an edition. So if you compile a package in a new edition, old sessions do not suffer from this error anymore:

ORA-04068: existing state of packages has been discarded

Requirements
The following requirements should be sufficient for a reliable deployment based on EBR:

  1. A deployment must be regarded as one transaction; either everything goes well and the (last) new edition becomes active (i.e. the new database default edition) or there is a problem and the database default edition remains the same, i.e. no difference with before.
  2. A deployment that is aborted may not influence other simultaneous deployments.
  3. It is not necessary to be able to undo a deployment after it has succeeded. Why would you want to undo a successful deployment anyway? And given the fact that database code and/or data may have changed after the deployment I do not see a simple solution. So undoing a deployment is only possible during or just after a failing deployment.

Design
Like I already described in my previous Blogs, the deployment is executed by Flyway that runs a series of migration scripts in the correct order (and only if they have never been executed before on that database).

Flyway administration
Every successful migration step (one script) is registered by Flyway in a metadata table called “schema_version”. If a step fails, it is registered as such (the step is flagged as an error) and the deployment is aborted. Now you can solve the error and restart Flyway. Flyway first has to repair its metadata table (i.e. remove the faulty step, see http://flywaydb.org/documentation/command/repair.html) and then you can continue with the failing step. So a migration script that caused an error will be re-executed later on.

Flyway call-backs
Flyway has functionality that is essential in combination with EBR: SQL call-backs (see http://flywaydb.org/documentation/callbacks.html). These SQL call-backs are SQL scripts that are executed before/after individual migration steps and the complete migration (deployment). The programmer can add code to support or check the deployment.

Happy flow
Using Flyway this will become the ‘happy flow’:

  1. Flyway executes the migration scripts (DDL and/or DML) for one schema.
  2. After every migration script you can verify the deployment, for instance by recompiling the schema and checking that all schema objects are valid.

In case of deployment errors this will not suffice.

Possible errors
What kind of errors can occur and how can Flyway help?

  • The execution of a migration script fails with an (Oracle) error.
    Flyway will issue a rollback but this will only roll back DML that has not been committed. Please note that Oracle implicitly commits a DDL statement. This means that you should first execute all your DDL migration scripts and only then a single DML migration script (since Flyway commits after every script). Now when something goes wrong during the last DML step, a rollback will restore the data (provided you do not commit in the DML script yourself).
  • A verification step fails.
    Maybe you have installed an invalid package. Or maybe the code quality is not as it should be. You can use the SQL call-backs afterEachMigrate (after every step) or afterMigrate (at the end) to verify.

(Un)happy flow
How can you undo a failed deployment? Or to be more exact, how is it possible that the new version does not become active after a failure. In the introduction I already said that EBR could help. But how can it help?

Let us investigate some examples to determine the best strategy.
The target database schema S1 is empty and we want to let Flyway install release R1 containing scripts V1.sql and V2.sql creating views V1 and V2 respectively. Script V2.sql contains an error.

Example 1: one edition per deployment and the second script fails
At the start Flyway creates a new edition E0 using the beforeMigrate call-back. Next Flyway runs the scripts V1.sql and V2.sql. Now if no script failed, edition E0 would become the new default edition (set in the afterMigrate call-back). But, unfortunately, the second script fails. Well, we can just drop edition E0 and restart Flyway after correcting the error, can't we? No luck: Flyway has already recorded that V1.sql was okay, so it will never rerun that script. Hence a rerun will now create edition E0 again; however, it will contain just view V2. View V1 has disappeared due to dropping edition E0 and the cleverness of Flyway.

Example 2: one edition per migration step and the second script fails
Now Flyway will create an edition before each migration step (using call-back beforeEachMigrate), E1 before step 1 and E2 before step 2. Now when the second script fails, you just drop edition E2 and correct the error and rerun Flyway. That will do the trick. The administration of Flyway is in sync with the administration of the database. Please note that for other sessions the database still looks the same when the migration fails: the default database edition they use has not changed because it is only changed if the whole migration succeeds.

Example 3: as example 2 but now another release is installed after the failure
The deployment to S1 fails and someone else installs release R2 to schema S2 using the same approach as in example 2. That release contains scripts V3.sql and V4.sql also creating views. These scripts contain no errors so the installation succeeds. Flyway has now created editions E1, E2 (containing an error), E3 and E4 and we started with the Oracle default ORA$BASE.

This is the list: ORA$BASE -> E1 -> E2 -> E3 -> E4

Since you can only drop editions at the head or the tail of the chain, edition E2 cannot be dropped. So release R1 can never be corrected.
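For reference, the edition plumbing used in these examples is plain DDL. A minimal sketch, assuming the deployment schema is called S1 and has the required edition privileges (edition names follow the examples above):

-- enable editions for the schema (once, irreversible)
alter user s1 enable editions;

-- create an edition per migration step and allow the schema to use it
create edition e1 as child of ora$base;
grant use on edition e1 to s1;

-- after a failed step: drop the faulty edition again (only possible at the head or tail of the chain)
drop edition e2 cascade;

-- after a successful migration: make the new edition the database default
alter database default edition = e1;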

In order to overcome all these problems we implement the following rules:

  1. before every migration step a new edition is created (if the edition already exists due to a previous error it will be dropped first);
  2. if the migration fails, the error must be corrected and the same migration must restart (Flyway will restart it at the point where it failed);
  3. there may be only one migration active in order to keep the editions created linked to each other;
  4. only when a migration succeeds:
  • the default database edition changes to the last edition created;
  • another migration may commence.

This list of detailed actions describes how to handle the ‘unhappy flow’:

Note: the developer has to ensure that the DDL scripts are executed first and the DML migration last. In case of an error Flyway will then never have committed the data.

  1. Make sure the deployment schema becomes ’editions enabled’.
     Phase: once in a lifetime. Remark: you cannot undo this!
  2. Log on using the deployment schema.
     Phase: logon. Remark: only the deployment user needs the right to change his objects.
  3. Ensure there is only one deployment session active (per database).
     Phase: before deployment (Flyway call-back beforeMigrate). Remark: use an exclusive installation lock (via DBMS_LOCK). This is necessary to maintain integrity. If another deployment session has the lock, wait for a minute before a time-out.
  4. Check whether the last deployment went well or whether you continue for the same schema. If not, abort.
     Phase: idem. Remark: if you do not correct a faulty deployment for schema A and you start a deployment for another schema B that succeeds, you end up with an incomplete deployment for schema A (because the latest edition becomes active). This is wrong.
  5. Create an edition and grant it to the deployment schema.
     Phase: for every migration (Flyway call-back beforeEachMigrate). Remark: the edition name depends on the schema name and installation sequence, for example APPL_RELATIE$100 for migration number 100 in schema APPL_RELATIE.
  6. If the previous step fails because the edition already exists, you have to drop the edition and repeat the previous step.
     Phase: idem. Remark: apparently the previous deployment failed and it is corrected now. Please note that when a deployment session fails, it cannot always recover itself (think of a database shutdown abort), so you have to do it later.
  7. Activate the edition for this deployment session.
     Phase: idem. Remark: now you can create ’editionable’ objects in the deployment schema (if the schema is ’editions enabled’).
  8. Execute the migration scripts.
     Phase: each migration (Flyway). Remark: objects are created in the new edition (they become actual in that edition). EBR will ensure that package X becomes actual if package Y is installed and package X depends on Y.
  9. Check the deployment: compile all invalid schema objects and raise an exception if there is any invalid object that depends on this schema. Invalid objects are not allowed, so abort the deployment.
     Phase: after each migration (Flyway call-back afterEachMigrate).
  10. Grant the current session edition to PUBLIC.
      Phase: after deployment (Flyway call-back afterMigrate).
  11. Make the current edition the database default.
      Phase: idem. Remark: the deployment has succeeded.
  12. Release the installation lock.
      Phase: idem.
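Action 9 can be implemented in the afterEachMigrate call-back. A minimal sketch (the error number and message are assumptions):

-- afterEachMigrate.sql: recompile and fail the deployment if invalid objects remain
declare
  l_invalid pls_integer;
begin
  dbms_utility.compile_schema(schema => user, compile_all => false);
  select count(*)
    into l_invalid
    from user_objects
   where status = 'INVALID';
  if l_invalid > 0 then
    raise_application_error(-20000, 'Deployment left ' || l_invalid || ' invalid object(s)');
  end if;
end;
/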


So only when everything goes well, the latest edition (see view ALL_EDITIONS) is equal to the database default edition (see view DATABASE_PROPERTIES).
If anything goes wrong (even when the deployment session is aborted unexpectedly), that will not be the case and you can recover by executing the same deployment that – thanks to Flyway – will continue with the failed step.
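A quick way to check this afterwards is to compare the leaf edition with the database default edition; a minimal sketch:

-- the leaf edition: the one that is not a parent of any other edition
select e.edition_name as leaf_edition
  from all_editions e
 where e.edition_name not in (select parent_edition_name
                                from all_editions
                               where parent_edition_name is not null);

-- the database default edition
select property_value as default_edition
  from database_properties
 where property_name = 'DEFAULT_EDITION';

If the two values differ, the last deployment did not complete and has to be corrected first.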

Please note once more that it is not possible to deploy to another schema after a faulty deployment: you have to correct the error first.
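The exclusive installation lock from action 3 can be taken with DBMS_LOCK. A minimal sketch (the lock name, time-out and error number are assumptions; the deployment schema needs execute rights on DBMS_LOCK):

declare
  l_handle varchar2(128);
  l_result integer;
begin
  dbms_lock.allocate_unique(lockname => 'APPL_DEPLOYMENT_LOCK', lockhandle => l_handle);
  -- 0 = lock granted, 1 = time-out: another deployment session is still active
  l_result := dbms_lock.request(lockhandle        => l_handle,
                                lockmode          => dbms_lock.x_mode,
                                timeout           => 60,
                                release_on_commit => false);
  if l_result <> 0 then
    raise_application_error(-20001, 'Another deployment is active (request returned ' || l_result || ')');
  end if;
end;
/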

Conclusion
The combination of Oracle Edition Based Redefinition and the open source tool Flyway enables us to execute a deployment without affecting other users. And as a big bonus it enables us to undo the deployment in case of errors. That really sounds like 24×7! 🙂


The post Continuous Delivery and the Oracle database (III) appeared first on AMIS Oracle and Java Blog.

Virtual Private Database…


Some time ago I was asked to assist in fixing, or at least finding the cause of, a performance problem. The application ran fine until the Virtual Private Database (VPD) policy was applied. Oracle claims there should be near zero impact on your application when you implement VPD, so how is this possible?

First of all, the policy applied was a rather complex one: a complex query had to be executed to determine whether the current user has access to the record. Let’s say this query takes about a second, then I would expect my query to run about a second slower, maybe two. But the query took several minutes to complete when the VPD policy was applied. This didn’t make sense to me, so I decided to find out what was really happening.
To do this, I opened up my sandbox database to recreate the situation.
First I create two new users:

create user vpd1 identified by vpd1
/
grant connect, resource to vpd1
/
create user vpd2 identified by vpd2
/
grant connect, resource to vpd2
/

Then I created a simple table to hold the data that should be protected by the VPD policy:

drop table emp purge
/
create table emp
       (empno number(4) not null,
        ename varchar2(10),
        job varchar2(9),
        mgr number(4),
        hiredate date,
        sal number(7, 2),
        comm number(7, 2),
        deptno number(2))
;
insert into emp values (7369, 'SMITH',  'CLERK',     7902, to_date('17-12-1980', 'DD-MM-YYYY'),  800, null, 20);
insert into emp values (7499, 'ALLEN',  'SALESMAN',  7698, to_date('20-02-1981', 'DD-MM-YYYY'), 1600,  300, 30);
insert into emp values (7521, 'WARD',   'SALESMAN',  7698, to_date('22-02-1981', 'DD-MM-YYYY'), 1250,  500, 30);
insert into emp values (7566, 'JONES',  'MANAGER',   7839, to_date('02-04-1981', 'DD-MM-YYYY'),  2975, null, 20);
insert into emp values (7654, 'MARTIN', 'SALESMAN',  7698, to_date('28-09-1981', 'DD-MM-YYYY'), 1250, 1400, 30);
insert into emp values (7698, 'BLAKE',  'MANAGER',   7839, to_date('01-05-1981', 'DD-MM-YYYY'),  2850, null, 30);
insert into emp values (7782, 'CLARK',  'MANAGER',   7839, to_date('09-06-1981', 'DD-MM-YYYY'),  2450, null, 10);
insert into emp values (7788, 'SCOTT',  'ANALYST',   7566, to_date('09-12-1982', 'DD-MM-YYYY'), 3000, null, 20);
insert into emp values (7839, 'KING',   'PRESIDENT', null, to_date('17-11-1981', 'DD-MM-YYYY'), 5000, null, 10);
insert into emp values (7844, 'TURNER', 'SALESMAN',  7698, to_date('08-09-1981', 'DD-MM-YYYY'),  1500,    0, 30);
insert into emp values (7876, 'ADAMS',  'CLERK',     7788, to_date('12-01-1983', 'DD-MM-YYYY'), 1100, null, 20);
insert into emp values (7900, 'JAMES',  'CLERK',     7698, to_date('03-12-1981', 'DD-MM-YYYY'),   950, null, 30);
insert into emp values (7902, 'FORD',   'ANALYST',   7566, to_date('03-12-1981', 'DD-MM-YYYY'),  3000, null, 20);
insert into emp values (7934, 'MILLER', 'CLERK',     7782, to_date('23-01-1982', 'DD-MM-YYYY'), 1300, null, 10);
commit
/
drop table emp_vpd purge
/
create table emp_vpd as select * from emp
/
commit
/

And of course I need to grant access to this table to the newly created users:

grant all on emp_vpd to vpd1
/
grant all on emp_vpd to vpd2
/

On the table I need to create a policy function, so I create a package to hold it (which mimics the customer’s package, just simpler):

create or replace package emp_vpd_policy as
  function first_policy(owner_in   in varchar2
                       ,objname_in in varchar2) return varchar2;
  function allowed(empno_in  in number
                  ,deptno_in in number) return number;
end emp_vpd_policy;
/
sho err
create or replace package body emp_vpd_policy as
  function first_policy(owner_in   in varchar2
                       ,objname_in in varchar2) return varchar2 is
  begin
    dbms_output.put_line('first policy');
    if (user = 'VPD1') then
      return 'emp_vpd_policy.allowed(emp_vpd.empno, emp_vpd.deptno)=10';
    elsif user = 'VPD2' then
      return 'emp_vpd_policy.allowed(emp_vpd.empno, emp_vpd.deptno)=20';
    else
      return '1=1';
    end if;
  end first_policy;
  function allowed(empno_in  in number
                  ,deptno_in in number) return number is
  begin
    dbms_output.put_line('emp_vpd_policy.allowed(' || empno_in || ',' || deptno_in || ')');
    return deptno_in;
  end allowed;
end emp_vpd_policy;
/
sho err

and then protect the EMP_VPD table using a policy:

begin
  sys.dbms_rls.add_policy(object_schema => 'DEMO'
                         ,object_name => 'EMP_VPD'
                         ,policy_name => 'EMP_VPD_SEL'
                         ,function_schema => '&myuser'
                         ,policy_function => 'EMP_VPD_POLICY.FIRST_POLICY'
                         ,statement_types => 'SELECT');
end;
/

The package will show what will happen when I perform a select on the table:

conn vpd1/vpd1
set serveroutput on size unlimited
select * from demo.emp_vpd
/
EMPNO ENAME      JOB         MGR HIREDATE          SAL      COMM DEPTNO
----- ---------- --------- ----- ----------- --------- --------- ------
 7782 CLARK      MANAGER    7839 6/9/1981      2450.00               10
 7839 KING       PRESIDENT       11/17/1981    5000.00               10
 7934 MILLER     CLERK      7782 1/23/1982     1300.00               10
first policy
first policy
emp_vpd_policy.allowed(7369,20)
emp_vpd_policy.allowed(7499,30)
emp_vpd_policy.allowed(7521,30)
emp_vpd_policy.allowed(7566,20)
emp_vpd_policy.allowed(7654,30)
emp_vpd_policy.allowed(7698,30)
emp_vpd_policy.allowed(7782,10)
emp_vpd_policy.allowed(7788,20)
emp_vpd_policy.allowed(7839,10)
emp_vpd_policy.allowed(7844,30)
emp_vpd_policy.allowed(7876,20)
emp_vpd_policy.allowed(7900,30)
emp_vpd_policy.allowed(7902,20)
emp_vpd_policy.allowed(7934,10)

In my case this is done rather quickly; there’s almost no difference in timing for the query with or without the policy applied. But as you can see, the function used in the policy predicate is executed for each and every record that is checked. So if this function takes a lot of time and your table has a lot of records, the query will run for a very long time. There has got to be a better way to do this.
Let’s analyze what happens: the policy function itself is executed twice per query. What if we use this behaviour to our benefit? In the first pass we can set up an in-memory data structure to hold whatever we need (this might take some time), and in the second pass we can use that data in the actual check.
First we drop the policy so we can create a new one:

begin
  sys.dbms_rls.drop_policy(object_schema => '&myuser'
                          ,object_name => 'EMP_VPD'
                          ,policy_name => 'EMP_VPD_SEL');
end;
/

For our implementation we need a Nested Table type to be created in the database:

create or replace type empnos_tt is table of number(4)
/

Then we create a new package to hold the policy function.

create or replace package emp_vpd_pp as
  function sel( owner_in   in varchar2
              , objname_in in varchar2
              ) return varchar2;
  function read_g_empnos return empnos_tt;
end emp_vpd_pp;
/
sho err

The function SEL will be used in the policy. The function READ_G_EMPNOS is needed to retrieve the data from the package variable. Then the actual implementation of the package:

create or replace package body emp_vpd_pp as
  g_empnos empnos_tt;
  beenhere boolean := false;
  function sel( owner_in   in varchar2
              , objname_in in varchar2
              ) return varchar2 is
  begin
    if not(beenhere) then
      if user = 'VPD1' then
        begin
          select emp.empno
            bulk collect into g_empnos
            from emp
           where emp.deptno = 10;
         exception
           when others then
           dbms_output.put_line(sqlerrm);
         end;
      elsif user = 'VPD2' then
        begin
          select emp.empno
            bulk collect into g_empnos
            from emp
           where emp.deptno = 20;
         exception
           when others then
           dbms_output.put_line(sqlerrm);
         end;
      end if;
    end if;
    beenhere := not(beenhere);
    if ((user = 'VPD1') or (user = 'VPD2')) then
      return 'emp_vpd.empno in (select column_value
                                  from table(emp_vpd_pp.read_g_empnos))';
    else
      return '1=1';
    end if;
  end sel;
  function read_g_empnos return empnos_tt
    is
    begin
      return (g_empnos);
    end;
begin
  beenhere := false;
end emp_vpd_pp;
/
sho err

In the initialization section of the package we initialize the Boolean variable. When the policy function is executed for the first time (per query), we select the column values we need and save them in the package variable. The second time the policy function is executed, we use the saved values in the predicate that is being added.

begin
  sys.dbms_rls.add_policy(object_schema => 'DEMO'
                         ,object_name => 'EMP_VPD'
                         ,policy_name => 'EMP_VPD_SEL'
                         ,function_schema => 'DEMO'
                         ,policy_function => 'EMP_VPD_PP.SEL'
                         ,statement_types => 'SELECT');
end;
/

Notice that the predicate using the Nested Table is always returned, but the Nested Table is only filled during the first execution of the policy function. Using this policy function gives exactly the same result, but the execution time improves dramatically: the database only has to execute the expensive query once per query instead of once for every row, and its result can be reused at almost no cost.
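To convince yourself that nothing changed functionally, you can simply repeat the earlier test with the new policy in place; a minimal check using the demo users created above:

conn vpd1/vpd1
set timing on
select * from demo.emp_vpd
/

The result is the same three department 10 rows as before, but the expensive query in EMP_VPD_PP.SEL is now executed only once for the whole statement.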


The post Virtual Private Database… appeared first on AMIS Oracle and Java Blog.

Virtual columns


Maarten wrote a post on Virtual Columns in the Oracle database. I read a blogpost on preventing people from issuing SELECT * on a table. This was done in a different database, so I decided to try it out in my Oracle Database.
First I create a simple table:

create table demo.emp_nuke
(
  empno  number(4) not null
, ename  varchar2(10)
, sal    number(7,2)
, deptno number(2)
, blowitup number generated always as (1/0) virtual
)
/

and I add some data to it:

begin
  insert into demo.emp_nuke(empno, ename, sal, deptno)
                       values (7499, 'ALLEN',  1600, 30);
  insert into demo.emp_nuke(empno, ename, sal, deptno)
                       values (7521, 'WARD',   1250, 30);
  insert into demo.emp_nuke(empno, ename, sal, deptno)
                       values (7654, 'MARTIN', 1250, 30);
  insert into demo.emp_nuke(empno, ename, sal, deptno)
                       values (7698, 'BLAKE',  2850, 30);
  insert into demo.emp_nuke(empno, ename, sal, deptno)
                       values (7844, 'TURNER', 1500, 30);
  insert into demo.emp_nuke(empno, ename, sal, deptno)
                       values (7900, 'JAMES',  950,  30);
end;
/

Then, when I issue this statement

select *
  from demo.emp_nuke e
/

I get an error:

ORA-01476: divisor is equal to zero

But I can still access the data from the table as long as I don’t include the virtual column:

select e.empno, e.ename, e.sal, e.deptno
  from demo.emp_nuke e
/
EMPNO ENAME            SAL DEPTNO
----- ---------- --------- ------
 7499 ALLEN        1600.00     30
 7521 WARD         1250.00     30
 7654 MARTIN       1250.00     30
 7698 BLAKE        2850.00     30
 7844 TURNER       1500.00     30
 7900 JAMES         950.00     30
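If you want to see which column is responsible for the error, the data dictionary shows the virtual column and its expression. A minimal sketch against the table created above:

select column_name, virtual_column, data_default
  from all_tab_cols
 where owner = 'DEMO'
   and table_name = 'EMP_NUKE'
/

The BLOWITUP column is listed with VIRTUAL_COLUMN = 'YES' and the divide-by-zero expression as its DATA_DEFAULT, so any query that touches it (such as SELECT *) raises ORA-01476.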

The post Virtual columns appeared first on AMIS Oracle and Java Blog.

Benefits of a Canonical Data Model (CDM) in a SOA environment


Introduction

The last few years I’ve been working on several SOA-related projects, small projects as well as quite large ones. Almost all of these projects use a Canonical Data Model (CDM). In this post I will explain what a CDM is and point out the benefits of using it in an integration layer or a Service Oriented Architecture (SOA) environment.

What is a Canonical Data Model?

The Canonical Data Model (CDM) is a data model that covers all data from connecting systems and/or partners. This does not mean the CDM is just a merge of all the data models. The way the data is modelled will differ from the connected data models, but the CDM is still able to contain all the data from the connecting data models. This means there is always a single, unambiguous translation of data from the CDM to the connecting data model and vice versa.
A good metaphor for this in spoken languages is Esperanto: each living spoken language can be translated to the constructed Esperanto language and vice versa.
In a CDM translation, the translation is not restricted to the way the data is modelled, but also covers the values of the data itself.

Example Data

Let’s take as an example the country values for the US and The Netherlands in four connecting data models. Three of these models are ‘based’ on the English language and the last one on the Dutch language. The first two data models are of type XML, the third one is CSV and the last one is a JSON type model:

  1. <location>
      <street>A-Street</street>
      <number>123a</number>
      <city>Atown</city>
      <country>United States</country>
      <continent>North America</continent>
    </location>
    <location>
      <street>B-Straat</street>
      <number>456b</number>
      <city>Bdam</city>
      <country>The Netherlands</country>
      <continent>Europe</continent>
    </location>
  2. <Address zip_code="93657">A-Street 123a, 93657, Atown</Address>
    <Address zip_code="1234 AB" country_code="nl">B-Straat 456b, Bdam</Address>
  3. Country;State;City;Street;Number;
    USA;California;Atown;A-Street;123a;
    NLD;;Bdam;B-Straat;456b;
  4. {"adres":
      {"landcode":1, "postcode":"93657", "woonplaats": "Atown", "straat": "A-Street", "nr":"123a"}
    },
    {"adres":
      {"landcode":31, "postcode":"1234 AB", "woonplaats": "Bdam", "straat": "B-Straat", "nr":"456b"}
    }

As you can see, there are not only four different ways of data modelling (two XML types, a CSV and a JSON type), but also four different values for the same country. The second example does not even have a value for the United States, because it defaults to “us”.
Despite the differences, these examples of different data models contain the same information. When a CDM is defined, it should be able to contain all data of these models. Note that the data items continent, state and zipcode do not exist in all the data models. Also note that there is no value for state in case of a Dutch address (example 3).
P.S. There might even be more connecting systems that do not do anything with addresses, so their data model does not contain address data.

Creating a Canonical Data Model

When a CDM model is created, it is wise to be flexible and ready for future changes and extensions. Create a CDM that fits best with the integration software being used. Most likely this will be an XML type data model. However, JSON is increasingly supported by integration software and is becoming more popular because of its reduced size and the fact that it is used in front-end technology, especially for mobile devices.

Let’s select XML for the CDM in this example, and make it English-based, which makes it easier in case non-Dutch developers have to work with it.
In our example the address data in our CDM can look like this:

<Addresses>
  <Address>
    <Street>A-Street</Street>
    <Number>123a</Number>
    <ZipCode>93657</ZipCode>
    <City>Atown</City>
    <State>California</State>
    <CountryCode>US</CountryCode>
    <ContinentCode>NA</ContinentCode>
  </Address>
  <Address>
    <Street>B-Straat</Street>
    <Number>456b</Number>
    <ZipCode>1234 AB</ZipCode>
    <City>Bdam</City>
    <CountryCode>NL</CountryCode>
    <ContinentCode>EU</ContinentCode>
  </Address>
</Addresses>

For the technical reader: the definition of this XML fragment (XSD):

<element name="Addresses" type="tns:tAddresses"/>
<complexType name="Addresses">
  <sequence>
    <element name="Address" type="tns:tAddress" minOccurs="0" maxOccurs="unbounded"/>
  </sequence>
</complexType>
<complexType name="tAddress">
  <sequence>
    <element name="Street" type="string" minOccurs="0" maxOccurs="1"/>
    <element name="Number" type="string" minOccurs="0" maxOccurs="1"/>
    <element name="ZipCode" type="string" minOccurs="0" maxOccurs="1"/>
    <element name="City" type="string" minOccurs="0" maxOccurs="1"/>
    <element name="State" type="string" minOccurs="0" maxOccurs="1"/>
    <element name="CountryCode" type="tns:tCountryCode" minOccurs="0" maxOccurs="1"/>
    <element name="ContinentCode" type="tns:tContinentCode" minOccurs="0" maxOccurs="1"/>
  </sequence>
</complexType>
<simpleType name="tCountryCode">
<!-- no fixed enum, because countries is not a fixed set in time. -->
  <restriction base="string">
    <pattern value="[A-Z]{2}"/>
  </restriction>
</simpleType>
<simpleType name="tContinentCode">
  <restriction base="string">
      <enumeration value="AF"/><!-- Africa -->
      <enumeration value="AN"/><!-- Antarctica -->
      <enumeration value="AS"/><!-- Asia -->
      <enumeration value="EU"/><!-- Europe -->
      <enumeration value="NA"/><!-- North America -->
      <enumeration value="OC"/><!-- Oceania -->
      <enumeration value="SA"/><!-- South America -->
  </restriction>
</simpleType>

This XML data structure (model) contains all the data items available in our examples. When it comes to flexibility, it is wise to use elements only and no attributes in XML. Using only elements makes the model more flexible and ready for future changes. Do not use ‘mixed content’ elements, meaning elements with data as well as child elements. An element is either a container element containing child elements or an element only containing data. Create a ‘plural container’ element for all elements that might (in future) occur more than once. Make the plural element single and mandatory (min=1, max=1) and its child elements optional (min=0, max=unbounded). This keeps your model backwards compatible.
It is wise to have standards for the CDM and one person (or a group in a large project) who is responsible for maintaining the CDM model. In the XSD you can see that in this CDM example all the data elements are optional. You could argue there should at least be a street or a city. But what if there is a system that handles addresses that are still being created, so that between screens only half the data of an address is present? Or a system that uses only a part, or maybe even a single data item, of an address?

The first benefit of using a CDM: Less translations

Now why would you introduce another extra data model, when you already have to deal with existing data models? Can’t we just choose one of them and use it as the central ‘canonical’ data model? Or can’t we just translate data of the existing data models when they connect to each other?

I will start with the last question. When there are only two systems that are connected to each other and there are no future plans to connect them with other systems, that is a good option; it is overkill to introduce a CDM. But when there are three systems that connect to each other, a CDM already breaks even: three systems have a maximum of 6 translations, A-B, B-C and C-A (and vice versa), and with an interconnecting CDM you also have a maximum of 6 translations, A-CDM, B-CDM and C-CDM (and vice versa).
When there are more than three connecting systems, the difference in the number of translations between using a CDM or not increases fast in favor of using a CDM:

Number of translations:

  # systems   without CDM   with CDM
  3            6             6
  4           12             8
  5           20            10
  6           30            12
  7           42            14
  8           56            16

In general, n systems that all connect to each other need at most n × (n − 1) translations without a CDM, but only 2 × n translations with a CDM.

Even when not all the systems are connected, the use of a CDM quickly results in fewer translations.
To give a graphical example of six connecting systems, not all of which connect with each other (the example is even quite limited):

Connections without a CDM
Six systems without a CDM

Connections with a CDM
Six systems with a CDM

In this example, you need 16 translations when you do not use a CDM. With a CDM, you need only 12.

The second benefit of using a CDM: Translation maintenance

There is a second reason for using a CDM related to translations. What happens when the data model of a connected system changes? For example when a system is replaced by another system, or when a system is updated to a newer version. In the latter case the changes will most likely be minor, but they still have to be checked at every connection point, i.e. for every translation, of that system.
Let’s use the picture above and assume that system E is replaced by system X.
When no CDM is used, there are four connections, with systems A, B, C and D. This means there are 8 translations that have to be changed, two per system: to and from system X. For example, when A calls X, the request is a translation from A to X and the response from X to A. When a CDM is used, only two translations have to be changed: from CDM to X and from X to CDM.

Graphically explained:

Maintenance without a CDM
Maintenance without a CDM

Maintenance with a CDM
Maintenance with a CDM

The third benefit of using a CDM: Logic maintenance

Often the integration software that connects the systems also contains some logic or orchestration (e.g. with BPEL). For example: when a message from system A arrives and it is an order, the order has to be routed to the ERP and to the financial system. And if the order is for a registered customer, the order also has to be routed to the CRM system. This kind of rule means there is some logic: the integration layer asks the CRM system whether the customer of the order is a registered customer and, depending on the answer, routes the order to the CRM system or not. When this logic uses the data model of the connected systems, there is a dependency between the logic and the connecting system. So when one of the connecting systems changes, you need to check all logic to see if it uses (some part of) the data model of that connecting system. And if so, the logic has to be adjusted or rewritten. When a CDM is used, all logic (assuming this is done right) is written against the data model of the CDM. Thus there is no dependency, and a change of a connecting system does not affect the business logic in the integration layer.
Let’s take the previous pictures as an example again and assume there is business logic written in BPEL at three places: business logic related to systems A, D and E, business logic related to systems B and E, and business logic related to systems B and F. Now again: what happens when system E is replaced by system X? It means that BPEL1 and BPEL2 have to be adjusted or even rewritten (and tested), whereas with a CDM you do not have to do anything!

Graphically explained:

Logic maintenance without a CDM
Logic maintenance without a CDM

Logic maintenance with a CDM
Logic maintenance with a CDM

Existing Data model as CDM?

At the start of this blogpost I raised the question whether an existing data model of a connecting system can be used as the CDM. In theory this is possible. Mostly there will be one large central system, most likely the ERP, that covers all or almost all kinds of data. It may be tempting to use that model as the CDM. But what if somewhere in the future the ERP is replaced by a new version? Even minor differences can cause problems. You might be tempted to keep the old data model as the CDM and make translations from the new model to the CDM, i.e. the old data model. When using XML and the new and the old model have different namespaces, this is even possible. But still, you are bound to the old data model of an outdated system. Mostly that is not what you want. It might even cause problems with licenses, especially when the system from which the data model is taken as the CDM is replaced by a system from another vendor.
Another disadvantage is that it could be confusing for developers of the system, especially future developers who are confronted with multiple data models of which two are quite similar. Mistakes are easily made. And what if a new system is connected and new data elements have to be added to the model? How flexible is it? Can it easily be changed and extended while staying backwards compatible? That is why I advise creating your own CDM!

Conclusion

It is quite clear that using a Canonical Data Model in an integration layer or SOA environment soon pays off. In essence it decouples the external systems (by their data models) from the integration layer or SOA environment, and so in fact from each other!
How do you do this? How do you set up a CDM that is flexible, so it can be changed and extended easily while remaining backwards compatible? And the data model should still fit into the interface descriptions of systems (WSDL) without becoming so big that it is, functionally seen, meaningless. This means it must be possible to tailor it, so that the interface (WSDL) reflects its functionality.
Another topic is standards and best practices about data usage, or XML usage specifically. Which standards are useful and why? When using XML, should you use a predefined XML ‘flavor’ like “Russian Doll”, “Venetian Blind”, “Salami Slice” or “Garden of Eden”? How about run-time dependencies? Should you use a central run-time CDM with versioning, or only a central design-time CDM which does not exist at run time but only acts as a copy-paste reference for development? In my next blogpost I will share my experiences with these questions and give advice that prevents the problems we have run into.


The post Benefits of a Canonical Data Model (CDM) in a SOA environment appeared first on AMIS Oracle and Java Blog.
