
Sequential Asynchronous calls in Node.JS – using callbacks, async and ES6 Promises


One of the challenges with programming in JavaScript (ECMAScript) in general and Node.JS in particular is dealing with asynchronous operations. Whenever a call is made to a function that will handle the request asynchronously, care has to be taken to be prepared to receive the result in an asynchronous fashion. Additionally, we have to ensure that the program flow does not continue prematurely – only those steps that can be performed without the result of the function call can proceed. Orchestrating multiple asynchronous calls – some of them sequential or chained, others possibly running in parallel – and gathering the results from those calls in the proper way is not trivial.

Traditionally, we used callback functions to program the asynchronous interaction: the caller passed a reference to a function to the asynchronous operation and, when done with the asynchronous operation, the called function would invoke this callback function to hand it the outcome. The call(ed)back function would then take over and continue the flow of the program. A simple example of a callback function is seen whenever an action is scheduled for execution using setTimeout():

setTimeout(function () {
  console.log("Now I am doing my thing ");
}, 1000);

or perhaps more explicitly:

function cb() {
  console.log("Now I am doing my thing ");
}

setTimeout(cb, 1000);

Chain of Asynchronous Actions

With multiple mutually dependent (chained) calls, using callback functions results in nested program logic that quickly becomes hard to read, debug and maintain, as the example below illustrates.

Function readElementFromJsonFile does what its name says: it reads the value of a specific element from the file specified in the input parameter. It does so asynchronously, and it will call the callback function to return the result once it has been obtained. Using this function, we are after a final value reached via a chain of files: starting with file step1.json, we read the nextfile element, which indicates the next file to read – in this case step2.json. This file in turn indicates that nextStep.json should be inspected, and so on. Clearly we have a chain of asynchronous actions, where each action's output provides the input for the next action.

In classic callback oriented JavaScript, the code for the chain of calls looks like this – the nested structure we have come to expect from using callback functions to handle asynchronous situations:

// the classic approach with nested callbacks
var fs = require('fs');
var step1 = "step1.json";

function readElementFromJsonFile(fileName, elementToRead, cb) {
    var elementToRetrieve = 'nextfile';
    if (elementToRead) {
        elementToRetrieve = elementToRead;
    }
    console.log('file to read from ' + fileName);
    fs.readFile(__dirname + '/' + fileName, "utf8", function (err, data) {
        var element = "";
        if (err) return cb(err);
        try {
            element = JSON.parse(data)[elementToRetrieve];
        } catch (e) {
            return cb(e);
        }
        console.log('value of element read = ' + element);
        cb(null, element);
    });
}//readElementFromJsonFile

readElementFromJsonFile(step1, null, function (err, data) {
    if (err) return err;
    readElementFromJsonFile(data, null, function (err, data) {
        if (err) return err;
        readElementFromJsonFile(data, null, function (err, data) {
            if (err) return err;
            readElementFromJsonFile(data, null, function (err, data) {
                if (err) return err;
                readElementFromJsonFile(data, 'actualValue', function (err, data) {
                    if (err) return err;
                    console.log("Final value = " + data);
                });
            });
        });
    });
});

The arrival of the Promise in ES6 – a native language mechanism that is therefore available in recent versions of Node.JS – makes things a little different: more organized, readable and maintainable. The function readElementFromJsonFile() now returns a Promise – a placeholder for the eventual result of the asynchronous operation. Even though the result will only be provided through the Promise object at a later moment, we can program as if the Promise represents that result right now – and we can specify in our code what to do when the function delivers on its Promise (by calling the built-in function resolve inside the Promise constructor).

The resolution of a Promise yields a value – in the case of function readElementFromJsonFile, the value read from the file. The then() operation that is executed when the Promise is resolved calls the function it was given as a parameter, passing the resolution value as input. In the code sample below we see how readElementFromJsonFile(parameters).then(readElementFromJsonFile) is used. This means: when the Promise returned from the first call to the function is resolved, call the function again, this time using the outcome of the first call as input to the second call. The fourth then() is a little more explicit: in the final call to readElementFromJsonFile we need to pass not just the outcome of the previous call as an input parameter, but also the name of the element to read from the file. Therefore we use an anonymous function that takes the resolution result as input and makes the call with the additional parameter. Something similar happens in the final then(), where the result of the previous call is simply printed to the output.

The code for our example of subsequently and asynchronously reading the files becomes:

var fs = require('fs');
var step1 = "step1.json";

function readElementFromJsonFile(fileName, elementToRead) {
    return new Promise((resolve, reject) => {
        var elementToRetrieve = 'nextfile';
        if (elementToRead) {
            elementToRetrieve = elementToRead;
        }
        console.log('file to read from ' + fileName);
        fs.readFile(__dirname + '/' + fileName, "utf8", function (err, data) {
            var element = "";
            if (err) return reject(err);
            try {
                element = JSON.parse(data)[elementToRetrieve];
            } catch (e) {
                return reject(e); // return, so resolve() below is not reached after a parse failure
            }
            console.log('element read = ' + element);
            resolve(element);
        });
    })// promise
}

readElementFromJsonFile(step1)
    .then(readElementFromJsonFile)
    .then(readElementFromJsonFile)
    .then(readElementFromJsonFile)
    .then(function (filename) { return readElementFromJsonFile(filename, 'actualValue') })
    .then(function (value) { console.log('Value read after processing five files = ' + value); })

Scheduled Actions as Promise or how to Promisify setTimeout

The built-in setTimeout() expects a callback function; it does not return a Promise. Something like:

setTimeout(1000).then(myFunc)

would be nice but does not exist.

This entry on Stack Overflow has a nice solution for working with setTimeout Promise-style:

function delay(t) {
   return new Promise(function(resolve) {
       setTimeout(resolve, t)
   });
}

function myFunc() {
    console.log('At last I can work my magic!');
}

delay(1000).then(myFunc);
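
With this delay() function in place, the same scheduling can also be written with async/await – supported in Node.js 7.6 and later. A minimal sketch:

async function main() {
  await delay(1000);   // suspend here without blocking the event loop
  console.log('At last I can work my magic!');
}

main();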



Node.js run from GitHub in Generic Docker Container backed by Dockerized Redis Cache


In a previous article I talked about a generic Docker Container image that can be used to run any Node.js application directly from GitHub or some other Git instance, by feeding the Git repo URL as a Docker run parameter (see https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/). In this article, I create a simple Node.js application that will be pushed to GitHub and run in that generic Docker container. It will use a Redis cache that is running in a separate Docker container.


The application does something simple: it handles HTTP requests; each request increments a request counter, and the current value of that counter is returned. The earlier implementation of this functionality used a local Node.js variable to keep track of the request count. This approach had two spectacular flaws: horizontal scalability (adding instances of the application fronted by a load balancer of sorts) led to strange results, because each instance kept its own request counter, and a restart of the application caused the count to be reset. The incarnation discussed in this article uses a Redis cache as a shared store for the request counter, one that will also survive a restart of the Node.js application instances. Note: of course this means Redis becomes a single point of failure, unless we cluster Redis too and/or use a persistent file as backup. Both options are available, but out of scope for this article.

Sources for this article can be found on GitHub: https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017/tree/master/part1 .

Run Redis

To run a Docker Container with a Redis cache instance, we only have to execute this statement:

docker run -d --name redis -p 6379:6379 redis

We run a container based on the Docker image called redis. The container is also called redis and its internal port 6379 is exposed and mapped to port 6379 on the host. That is all it takes: the image is pulled and the container is started.


Create Node.js Application RequestCounter – Talking to Redis

To talk to Redis from a Node.js application, there are several modules available. The most common and generic one seems to be called redis. To use it, I have to install it with npm:

npm install redis --save


To leverage Redis in my application code, I need to require('redis') and create a client connection. For that, I need the host and port of the Redis instance. The port was specified when we started the Docker container for Redis (6379) and the host IP is the IP of the Docker Machine (I am running Docker Tools on Windows).

Here is the naïve implementation of the request counter, backed by Redis. Naïve, because it does not cater for race conditions between multiple instances that could each read the current counter value from Redis, each increase it and write it back, causing one or more counts to be lost. Note that REDIS_HOST and REDIS_PORT can be specified through environment variables (read with process.env.<name of variable>).

//respond to HTTP requests with response: count of number of requests
// invoke from browser or using curl:  curl http://127.0.0.1:PORT
var http = require('http');
var redis = require("redis");

var redisHost = process.env.REDIS_HOST ||"192.168.99.100" ;
var redisPort = process.env.REDIS_PORT ||6379;

var redisClient = redis.createClient({ "host": redisHost, "port": redisPort });

var PORT = process.env.APP_PORT || 3000;

var redisKeyRequestCounter = "requestCounter";

var server = http.createServer(function handleRequest(req, res) {
    var requestCounter = 0;

    redisClient.get(redisKeyRequestCounter, function (err, reply) {
        if (err) {
            res.write('Request Count (Version 3): ERROR ' + err);
            res.end();
        } else {
            if (!reply || reply == null) {
                console.log("no value found yet");
                redisClient.set(redisKeyRequestCounter, requestCounter);
            } else {
                requestCounter = Number(reply) + 1;
                redisClient.set(redisKeyRequestCounter, requestCounter);
            }
            res.write('Request Count (Version 3): ' + requestCounter);
            res.end();
        }
    })
}).listen(PORT);

    //        redisClient.quit();

console.log('Node.JS Server running on port ' + PORT + ' for version 3 of requestCounter application, powered by Redis.');

 

Run the Node.JS Application talking to Redis

The Node.js application can be run locally – from the command line directly on the Node.js runtime.

Alternatively, since I have committed and pushed the application to GitHub, I can run it using the generic Docker Container image lucasjellema/node-app-runner that I prepared in this article: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/ – using a single startup command:

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8015:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter-3.js" -e "REDIS_HOST=127.0.0.1" -e "REDIS_PORT=6379" lucasjellema/node-app-runner

This command passes relevant values as environment variables – such as the GitHub repo URL, the directory in that repo and the exact script to run, as well as the host and port for Redis and the port the Node.js application should listen on for requests. In the standard Docker way, the internal port (8080) is mapped to the external port (8015).

 

The application can now be accessed from the browser.

Less Naïve Implementation using Redis Watch and Multi for Optimistic Locking

Although the code shown above seems to work, it is not robust. When scaling out, multiple instances can race against each other and overwrite each other's changes in Redis, because no locking has been implemented. Based on this article: https://blog.yld.io/2016/11/07/node-js-databases-using-redis-for-fun-and-profit/#.WSGEWtwlGpo I have extended the code with an optimistic locking mechanism. Additionally, the treatment of client connections is improved – reducing the chance of leaking connections.

//respond to HTTP requests with response: count of number of requests
// invoke from browser or using curl:  curl http://127.0.0.1:PORT
// use an optimistic locking strategy to prevent race conditions between multiple clients updating the requestCount at the same time
// based on https://blog.yld.io/2016/11/07/node-js-databases-using-redis-for-fun-and-profit/#.WSGEWtwlGpo
var http = require('http');
var Redis = require("redis");

var redisHost = process.env.REDIS_HOST || "192.168.99.100";
var redisPort = process.env.REDIS_PORT || 6379;

var PORT = process.env.APP_PORT || 3000;

var redisKeyRequestCounter = "requestCounter";

var server = http.createServer(function handleRequest(req, res) {
    increment(redisKeyRequestCounter, function (err, newValue) {
        if (err) {
            res.write('Request Count (Version 3): ERROR ' + err);
            res.end();
        } else {
            res.write('Request Count (Version 3): ' + newValue);
            res.end();
        }
    })
}).listen(PORT);


function _increment(key, cb) {
    var replied = false;
    var newValue;

    var redis = Redis.createClient({ "host": redisHost, "port": redisPort });
    // if the key does not yet exist, then create it with a value of zero associated with it
    redis.setnx(key, 0);
    redis.once('error', done);
    // ensure that if anything changes to the key-value pair in Redis (from a different connection), this atomic operation will fail
    redis.watch(key);
    redis.get(key, function (err, value) {
        if (err) {
            return done(err);
        }
        newValue = Number(value) + 1;
        // either watch tells no change has taken place and the set goes through, or this action fails
        redis.multi().
            set(key, newValue).
            exec(done);
    });

    function done(err, result) {
        redis.quit();

        if (!replied) {
            if (!err && !result) {
                err = new Error('Conflict detected');
            }

            replied = true;
            cb(err, newValue);
        }
    }
}

function increment(key, cb) {
    _increment(key, callback);

    function callback(err, result) {
        if (err && err.message == 'Conflict detected') {
            _increment(key, callback);
        }
        else {
            cb(err, result);
        }
    }
}

console.log('Node.JS Server running on port ' + PORT + ' for version 3 of requestCounter application, powered by Redis.');

This Node.js application is run in exactly the same way as the previous one, using requestCounter-4.js as APP_STARTUP rather than requestCounter-3.js.

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8015:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter-4.js" -e "REDIS_HOST=127.0.0.1" -e "REDIS_PORT=6379" lucasjellema/node-app-runner



ETL using Oracle DBMS_HS_PASSTHROUGH and SQL Server


While I prefer a "loosely coupled architecture" for replication between Oracle and SQL Server, sometimes a direct (database) link cannot be avoided. By using DBMS_HS_PASSTHROUGH for the data extraction, the two other ETL steps (transform and load) can be configured and administered with more flexibility, providing an almost acceptable level of "loosely coupled processing".
Consider this as a really simple ETL config:

    Extract: select SQL Server data with native SQL, using DBMS_HS_PASSTHROUGH and a PIPELINED function.
    Transform: define a view on top of the function that maps column names and column datatypes correctly.
    Load: SQL> insert into oracle_table select * from oracle_view;

When you use DBMS_HS_PASSTHROUGH, Oracle does not interpret the data it receives from SQL Server. By default that interpretation is done by the dg4odbc process, and the performance benefit of bypassing this process is considerable. Also, you are not restricted by the limitations of dg4odbc and can transform the data into anything you need.

Like dg4odbc, DBMS_HS_PASSTHROUGH depends on Heterogeneous Services (a component built into Oracle) to provide the connectivity between Oracle and SQL Server. Installation of unixODBC and a FreeTDS driver on Linux is required to set up the SQL Server datasource… installation and configuration steps can be found here and here. DBMS_HS_PASSTHROUGH is invoked through an Oracle database link. The package conceptually resides at SQL Server but, in reality, calls to this package are intercepted and mapped to one or more Heterogeneous Services calls. The FreeTDS driver, in turn, maps these calls to the API of SQL Server. More about DBMS_HS_PASSTHROUGH here.

Next is a short example of how to set up data extraction from SQL Server with DBMS_HS_PASSTHROUGH and data transformation within the definition of a view. In this example the SQL Server column names differ from the ones in Oracle in case, length, name and/or datatype, and are transformed by the view. NLS_DATE_FORMAT synchronization is an exception… it is done in the extract package itself. The reason is that all dates in this particular SQL Server database use a specific format, and it does not really obscure the code. But if you choose to keep all transformation code out of the extract package, you could create types with VARCHAR2s only, and put all your TO_NUMBER, TO_DATE and TO_TIMESTAMP conversion code in the view definition.
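
As an impression of that alternative – with hypothetical names, and the format mask as an assumption – the type would carry all columns as VARCHAR2 and the view would perform every conversion:

-- hypothetical all-VARCHAR2 variant: no interpretation in the extract package
CREATE OR REPLACE TYPE E01_RAW_REC
AS OBJECT(
  C01    VARCHAR2(30 CHAR),
  C05    VARCHAR2(30 CHAR) );
/

-- the view performs all conversions, including the dates
CREATE OR REPLACE VIEW SITE_RAW_VW
AS
SELECT TO_NUMBER(C01)                         SITEID,
       TO_DATE(C05,'YYYY-MM-DD HH24:MI:SS')   OPENINGDATE
FROM TABLE(E.E01_RAW);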

Extract

-- create Oracle types for uninterpreted SQL Server data
CREATE OR REPLACE TYPE E01_REC
AS OBJECT(
  C01    NUMBER(8),
  C02    VARCHAR2(25 CHAR),
  C03    VARCHAR2(3 CHAR),
  C04    NUMBER(8),
  C05    DATE,
  C06    DATE );
/

CREATE OR REPLACE TYPE E01_TAB AS TABLE OF E01_REC;
/

-- create the extract package
CREATE OR REPLACE PACKAGE E AUTHID DEFINER AS
---------------------------------------------------------
  FUNCTION E01 RETURN E01_TAB PIPELINED;
---------------------------------------------------------
END E;
/

-- create the extract package body
CREATE OR REPLACE PACKAGE BODY E AS
  v_cursor   BINARY_INTEGER;
  v_out_e01  E01_REC:=E01_REC(NULL,NULL,NULL,NULL,NULL,NULL);
-------------------------------------------------------------------------
  v_stat_e01 VARCHAR2(100):= 'Select SiteID
                                   , SiteName
                                   , SiteMnemonic
                                   , PointRefNumber
                                   , OpeningDate
                                   , ClosingDate
                               From ObjSite';

-------------------------------------------------------------------------
FUNCTION E01
RETURN E01_TAB PIPELINED
  IS
BEGIN
  execute immediate 'alter session set NLS_DATE_FORMAT = ''YYYY-MM-DD HH24:MI:SS'' ';
  v_cursor := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@<DBLINK>;
  DBMS_HS_PASSTHROUGH.PARSE@<DBLINK>(v_cursor,v_stat_e01);
  WHILE DBMS_HS_PASSTHROUGH.FETCH_ROW@<DBLINK>(v_cursor) > 0
    LOOP
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,1,v_out_e01.c01);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,2,v_out_e01.c02);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,3,v_out_e01.c03);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,4,v_out_e01.c04);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,5,v_out_e01.c05);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,6,v_out_e01.c06);
    PIPE ROW(v_out_e01);
    END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@<DBLINK>(v_cursor);
  RETURN;
EXCEPTION
  WHEN NO_DATA_NEEDED THEN
    DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@<DBLINK>(v_cursor);
  WHEN OTHERS THEN
    DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@<DBLINK>(v_cursor);
    DBMS_OUTPUT.PUT_LINE(SQLERRM||'--'||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
    RAISE;
END E01;
------------------------------------------------------------------------
END E;
/

Transform

CREATE OR REPLACE FORCE VIEW SITE_VW
AS
SELECT TO_NUMBER(C01) SITEID,
       C02            STATIONNAME,
       C03            SITEMNEMONIC,
       TO_NUMBER(C04) STATIONID,
       C05            OPENINGDATE,
       C06            CLOSINGDATE
FROM TABLE(E.E01);

Load

INSERT INTO SITE SELECT * FROM SITE_VW;
COMMIT;


When Screen Scraping became API calling – Gathering Oracle OpenWorld 2017 Session Catalog with Node


A dataset with all sessions of the upcoming Oracle OpenWorld 2017 conference is nice to have – for experiments and demonstrations with many technologies. The session catalog is exposed at a website – https://events.rainfocus.com/catalog/oracle/oow17/catalogoow17 


With searching, filtering and scrolling, all available sessions can be inspected. If data is available in a browser, it can be retrieved programmatically and persisted locally, for example in a JSON document. A typical approach for this is web scraping: having a server side program act like a browser, retrieve the HTML from the web site and query the data from the response. This process is described, for example, in this article – https://codeburst.io/an-introduction-to-web-scraping-with-node-js-1045b55c63f7 – for Node and the Cheerio library.

However, server side screen scraping of HTML will only be successful when the HTML is static. Dynamic HTML is constructed in the browser by executing JavaScript code that manipulates the browser DOM. If that is the mechanism behind a web site, server side scraping is at the very least considerably more complex (as it requires the server to emulate a modern web browser to a large degree). Selenium has been used in such cases – to provide a server side, programmatically accessible browser engine. Alternatively, screen scraping can also be performed inside the browser itself – as is supported, for example, by the Getsy library.

As you will find in this article – when server side scraping fails, client side scraping may be a much too complex solution. It is very well possible that the rich client web application uses a REST API that provides the data as a JSON document – an API that our server side program can also easily leverage. That turned out to be the case for the OOW 2017 website: instead of complex HTML parsing and server side or even client side scraping, the challenge at hand boils down to nothing more than a little bit of REST calling.

Server Side Scraping

Server side scraping starts with client side inspection of a web site, using the developer tools in your favorite browser.


A simple first step with cheerio to get hold of the content of the H1 tag:
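
In code, that first step could look like this – a sketch using request-promise and cheerio (the catalog URL is the one mentioned above):

var rp = require('request-promise');
var cheerio = require('cheerio');

rp('https://events.rainfocus.com/catalog/oracle/oow17/catalogoow17')
    .then(function (html) {
        var $ = cheerio.load(html);   // parse the HTML we received
        console.log($('h1').text());  // print the content of the H1 tag
    })
    .catch(function (err) { console.error(err); });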


Now let's inspect where in the web page we find those session details.


We are looking for LI elements with a CSS class of rf-list-item. Extending our little Node program with queries for these elements:
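
For example, extending the previous sketch:

rp('https://events.rainfocus.com/catalog/oracle/oow17/catalogoow17')
    .then(function (html) {
        var $ = cheerio.load(html);
        var sessions = $('li.rf-list-item');   // query the session list items
        console.log('Number of sessions found: ' + sessions.length);
    });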


The result is disappointing. Apparently the document we have pulled with request-promise does not contain these list items. As I mentioned before, that is not necessarily surprising: these items are added to the DOM at runtime by JavaScript code executed after an Ajax call is used to fetch the session data.

Analyzing the REST API Calls

Using the Developer Tools in the browser, it is not hard to figure out which call was made to fetch these results.


The URL is there: https://events.rainfocus.com/api/search. Now the question is: what headers and parameters are sent as part of the request to the API – and what HTTP operation should it be (GET, POST, …)?

The information in the browser tools reveals which headers and form data are sent as part of the request.

A little experimenting with custom calls to the API in Postman made clear that rfWidgetId and rfApiProfileId are required form data.


Postman provides an excellent feature to quickly generate source code, in many technologies, for the REST call you have just put together.


REST Calling in Node

My first stab:

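It went along these lines – a sketch based on the code Postman generated; the actual values for rfWidgetId and rfApiProfileId come from the browser tools, and the name of the search parameter is an assumption here:

var request = require('request');

var options = {
    method: 'POST',
    url: 'https://events.rainfocus.com/api/search',
    formData: {
        rfWidgetId: '<value from browser tools>',      // required form data
        rfApiProfileId: '<value from browser tools>',  // required form data
        search: 'CON1*'                                // hypothetical search parameter
    }
};

request(options, function (error, response, body) {
    if (error) throw new Error(error);
    console.log(body);   // JSON document with the session search results
});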

With the sample generated by Postman as a starting point, it is not hard to create a Node application that iterates through all session types – TUT, BOF, GEN, CON, etc.

To limit the size of the individual (requests and) responses, I have decided to search the sessions of each type in 9 blocks – for example CON1, CON2, CON3 etc. The search string is padded with wild cards – so CON1 will return all sessions with an identifier starting with CON1.

To be nice to the OOW 2017 server – and prevent being blocked out by any filters and protections – I will fire requests spaced apart (with a 500 ms delay between each of them).

Because this code is for one time use only, and is not constrained by time limits, I have not put much effort into parallelizing the work, creating the most elegant code in the world, etc. It is simply not worth it. This will do the job – once – and that is all I need. (Although I do want to extend the code to help me download the slide decks for the presentations in an automated fashion; each conference, it takes me several hours to manually download slide decks to take with me on the plane ride home – only to find out each year that I am too tired to actually browse through those presentations.)

The Node code for constructing a local file with all OOW 2017 sessions:
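
That code was published as a screenshot; a condensed sketch of its structure is shown below. The session types, the nine blocks and the 500 ms delay are as described above; the search parameter name and the shape of the response are assumptions:

var request = require('request');
var fs = require('fs');

var sessionTypes = ['TUT', 'BOF', 'GEN', 'CON', 'HOL'];   // the real list is longer
var searches = [];
var allSessions = [];

// compose one search term per session type per block: CON1*, CON2*, ... CON9*
sessionTypes.forEach(function (type) {
    for (var block = 1; block <= 9; block++) {
        searches.push(type + block + '*');
    }
});

function fireCall(index) {
    if (index >= searches.length) {
        fs.writeFileSync('oow2017-sessions-catalog.json', JSON.stringify(allSessions));
        return console.log('Done: ' + allSessions.length + ' sessions saved');
    }
    request({
        method: 'POST',
        url: 'https://events.rainfocus.com/api/search',
        formData: {
            rfWidgetId: '<value from browser tools>',
            rfApiProfileId: '<value from browser tools>',
            search: searches[index]
        }
    }, function (error, response, body) {
        if (!error) {
            // the exact response structure is an assumption; collect whatever sessions were returned
            allSessions = allSessions.concat(JSON.parse(body).items || []);
        }
        // space requests 500 ms apart, to be nice to the OOW 2017 server
        setTimeout(function () { fireCall(index + 1); }, 500);
    });
}

fireCall(0);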


Oracle Mobile Cloud Service (MCS): Overview of integration options


Oracle Mobile Cloud Service has a lot of options that allow it to integrate with other services and systems. Since it runs JavaScript on Node.js for custom APIs, it is very flexible.

Some features extend its own functionality, such as the Firebase configuration option to send notifications to mobile devices, while, for example, the connectors allow wizard driven integration with other systems. The custom API functionality, running on a recent Node.js version, ties it all together. In this blog article I'll provide a quick overview and some background of the integration options of MCS.

MCS is very well documented here and there are many YouTube videos explaining/demonstrating various MCS features here. So if you want to know more, I suggest looking at those.

Some recent features

Oracle is working hard on improving and expanding MCS functionality. For the latest improvements to the service, see the following page. Some highlights from the past half year that I personally appreciate and that will also get some attention in this blog:

  • Zero footprint SSO (June 2017)
  • Swagger support in addition to RAML for the REST connector (April 2017)
  • Node.js version v6.10.0 support (April 2017)
  • Support for Firebase (FCM) to replace GCM (December 2016)
  • Support for third party tokens (December 2016)

Feature integration

Notification support

In general there are two options for sending notifications from MCS: integrating with FCM and integrating with Syniverse. Since these are third party providers, you should compare the options (license, support, performance, cost, etc.) before choosing one of them.

You can also use any other notification provider if it offers a REST interface by using the REST connector. You will not get much help in configuring it through the MCS interface though; it will be a custom implementation.

Firebase Cloud Messaging / Google Cloud Messaging

Notification support is implemented by integrating with Google cloud messaging products. Google Cloud Messaging (GCM) is being replaced by Firebase Cloud Messaging (FCM) in MCS. GCM has been deprecated by Google for quite a while now, so this is a good move. You do need a Google Cloud account, though, and have to purchase their services in order to use this functionality. See for example here how to implement this from a JET hybrid application.

Syniverse

Read more on how to implement this here. You first have to create a Syniverse account. Next, subscribe to the Syniverse Messaging Service, register the app and obtain credentials. These credentials you can then register in MCS, under client management.

 

Beacon support

Beacons broadcast packages that mobile devices can detect over Bluetooth. The package structure the beacons broadcast can differ; there are samples available for iBeacon, altBeacon and Eddystone, but others can be added if you know the corresponding package structure. See the following presentation for some background on beacons and how they can be integrated in MCS. How to implement this for an Android app can be watched here.

 

Client support

MCS comes with several SDKs which provide easy integration of a client with MCS APIs. Available client SDKs are iOS, Android, Windows and Web (plain JavaScript). These SDKs provide an easy alternative to using the raw MCS REST APIs: they wrap the APIs and provide easy access in the respective language of the client.

Authentication options (incoming)

SAML, JWT

Third party token support for SAML and JWT is available. Read more here. A token exchange is available as part of MCS which creates MCS tokens from third party tokens, based on specifically defined mappings. These MCS tokens can be used by clients in subsequent requests. This does require some work on the client side, but the SDKs of course help with this.

Facebook Login

Read here for an example on how to implement this in a hybrid JET application.

OAuth2 and Basic authentication support

No third party OAuth tokens are supported. This is not strange, since an OAuth token does not contain user data and MCS needs a way to validate the token. MCS provides its own OAuth2 STS (Secure Token Service) to create tokens for MCS users. Read more here.

Oracle Enterprise Single Sign-on support

Read here. This is not to be confused with the Oracle Enterprise Single Sign-on Suite (ESSO). This is browser based authentication of Oracle Cloud users who are allowed access to MCS.

These provide the most common web authentication methods. Especially the third party SAML and JWT support provides many integration options with third party authentication providers. OKTA is given as an example in the documentation.

Application integration: connectors

MCS provides connectors that allow wizard driven configuration in MCS. Connectors are used for outgoing calls. A connector API is available which makes it easy to interface with the connectors from custom JavaScript code. The connectors support the use of Oracle Credential Store Framework (CSF) keys and certificates. TLS versions up to TLS 1.2 are supported; you are of course warned that older versions might not be secure. The requests the connectors make go over HTTP, since no other protocols are currently directly supported. You can of course use REST APIs and ICS as wrappers should you need them.
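
As an illustration, calling a REST connector from custom API code looks roughly like this – a sketch based on the custom code SDK, with a made-up API path and connector name:

// custom API implementation (sketch); handles GET /mobile/custom/demoapi/things
module.exports = function (service) {
    service.get('/mobile/custom/demoapi/things', function (req, res) {
        // invoke the (hypothetical) REST connector 'myconnector' through the connector API
        req.oracleMobile.connectors.myconnector.get('things', null, { inType: 'json' })
            .then(function (result) {
                res.status(result.statusCode).send(result.result);
            })
            .catch(function (error) {
                res.status(500).send(error);
            });
    });
};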

Connector security settings

For the different connectors, several Oracle Web Service Security Manager (OWSM) policies are used. See here. These allow you to configure several security settings and for example allow usage of WS Security and SAML tokens for outgoing connections. The policies can be configured with security policy properties. See here.

REST

It is recommended to use the REST connector instead of making calls directly from your custom API code, because connectors integrate well with MCS and provide security and monitoring benefits – for example, out of the box analytics.

SOAP

The SOAP connector can transform SOAP to JSON and back, to make working with the XML easier in JavaScript code. This does have some limitations, however.

Connector scope

There are also some general limitations, defined by the scope of the connector API:

  • Only SOAP version 1.1 and WSDL version 1.2 are supported.
  • Only the WS-Security standard is supported. Other WS-* standards, such as WS-RM or WS-AT, aren’t supported.
  • Only document style and literal encoding are supported.
  • Attachments aren’t supported.
  • Of the possible combinations of input and output message operations, only input-output operations and input-only operations are supported. These operations are described in the Web Services Description Language (WSDL) Version 1.2 specification.

Transformation limitations

The transformation from XML to JSON has the following limitations:

  • A choice group with child elements belonging to different namespaces having the same (local) name isn't supported. This is because JSON doesn't have any namespace information.
  • A sequence group with child elements having duplicate local names isn't supported. For example, <Parent><ChildA/><ChildB/>…<ChildA/>…</Parent>. This would translate to an object with duplicate property names, which isn't valid.
  • XML Schema Instance (xsi) attributes aren't supported.

Integration Cloud Service connector

Read more about this connector here. This connector allows you to call ICS integrations: you can connect to your ICS instance and select an integration from a drop-down menu. For people who also use ICS in their cloud architecture, this will probably be the most commonly used connector.

Fusion Applications connector

Read more about this connector here. The flow looks similar to that of the ICS Cloud Adapters (here). In short: you authenticate, a resource discovery is done and local artifacts are generated which contain the connector configuration. At runtime this configuration is used to access the service. The wizard driven configuration of the connector is a great strength. MCS does not provide the full range of cloud adapters that is available in ICS and SOA CS.

Finally

Flexibility

Oracle Mobile Cloud Service allows you to define custom APIs using JavaScript code. Oracle Mobile Cloud Service V17.2.5-201705101347 runs Node.js version v6.10.0 and OpenSSL version 1.0.2k (process.versions), which are quite new! Because a new OpenSSL version is supported, TLS 1.2 ciphers are also supported and can be used to create connections to other systems. This can be done from custom API code or by configuring the OWSM settings in the connector configuration. It runs on Oracle Enterprise Linux 6, kernel 2.6.39-400.109.6.el6uek.x86_64 (JavaScript: os.release()). Most JavaScript packages will run on this version, so few limitations there.

ICS also provides an option to define custom JavaScript functions (see here). I haven't looked at the engine used in ICS, but I doubt it is a full blown Node.js instance and suspect (please correct me if I'm wrong) that a JVM JavaScript engine is used, as in SOA Suite / SOA CS. This provides less functionality and performance compared to Node.js instances.

What is missing?

Integration with other Oracle Cloud services

Mobile Cloud Service does lack out of the box integration options with other Oracle Cloud services. Only four HTTP based connectors are available. Thus if you want to integrate with an Oracle Cloud database (a different one than the one provided), you have to use the external database's REST API (with the REST connector or from custom API code), or use for example the Integration Cloud Service connector or the Application Container Cloud Service to wrap the database functionality. This of course requires a license for the respective services.

Cloud adapters

A Fusion Applications connector is present in MCS, and OWSM policies are used in MCS. It would therefore not be strange if MCS were technically capable of running more of the cloud adapters that are present in ICS. This would greatly increase the integration options for MCS.

Mapping options for complex payloads

Related to the above: if payloads become large and complex, mapping fields also becomes more of a challenge. ICS currently does a better job at this than MCS; it has a better mapping interface and provides mapping suggestions.


R and the Oracle database: Using dplyr / dbplyr with ROracle on Windows 10


R uses data extensively. Data often resides in a database. In this blog I will describe installing and using dplyr, dbplyr and ROracle on Windows 10 to access data from an Oracle database and use it in R.

Accessing the Oracle database from R

dplyr makes the most common data manipulation tasks in R easier. dplyr can use dbplyr, which provides a translation from the dplyr verbs to SQL queries. dbplyr 1.1.0 was released on 2017-06-27; see here. It uses the DBI (R Database Interface). This interface is implemented by various drivers, such as ROracle. ROracle is an Oracle driver based on OCI (Oracle Call Interface), a high performance native C interface to connect to the Oracle Database.

Installing ROracle on Windows 10

I encountered several errors when installing ROracle in Windows 10 on R 3.3.3. The steps to take to do this right in one go are the following:

  • Determine your R platform architecture. 32 bit or 64 bit. For me this was 64 bit
  • Download and install the Oracle Instant Client with the corresponding architecture (here). Download the Basic and SDK packages. Put the sdk directory from the SDK zip in a subdirectory of the extracted Basic zip (at the same level as vc14)
  • Download and install RTools (here)
  • Set the OCI_LIB64 or OCI_LIB32 variable to the Instant Client path
  • Set the PATH variable to include the location of oci.dll
  • Install ROracle (install.packages("ROracle") in R)

Encountered errors


Warning in install.packages :
 package ‘ROracle_1.3-1.zip’ is not available (for R version 3.3.3)

You probably tried to install the ROracle package which Oracle provides on an R version that is too new (see here). This will not work on R 3.3.3. You can compile ROracle yourself or use the (older) R version Oracle supports.


Package which is only available in source form, and may need compilation of C/C++/Fortran: ‘ROracle’ These will not be installed

This can be fixed by installing RTools (here), which installs all the tools required to compile sources on a Windows machine.

Next you will get the following question:


Package which is only available in source form, and may need compilation of C/C++/Fortran: ‘ROracle’
Do you want to attempt to install these from sources?
y/n:

If you say y, you will get the following error:


installing the source package ‘ROracle’

trying URL 'https://cran.rstudio.com/src/contrib/ROracle_1.3-1.tar.gz'
Content type 'application/x-gzip' length 308252 bytes (301 KB)
downloaded 301 KB

* installing *source* package 'ROracle' ...
** package 'ROracle' successfully unpacked and MD5 sums checked
ERROR: cannot find Oracle Client.
 Please set OCI_LIB64 to specify its location.

In order to fix this, you can download and install the Oracle Instant Client (the basic and SDK downloads).

Mind that when running a 64 bit version of R, you also need a 64 bit version of the Instant Client. You can check with the R version command. In my case: Platform: x86_64-w64-mingw32/x64 (64-bit). Next you have to set the OCI_LIB64 variable (for 64 bit; else OCI_LIB32) to the Instant Client path.

Next it will fail with something like:


Error in inDL(x, as.logical(local), as.logical(now), ...) :
 unable to load shared object 'ROracle.dll':
 LoadLibrary failure: The specified module could not be found.

This happens when oci.dll from the Instant Client is not in the PATH environment variable. Add it and it will work (at least it did on my machine). The INSTALL file from the ROracle package contains a lot of information about different errors which can occur during installation. If you encounter any other errors, be sure to check it.

How a successful 64 bit compilation looks

> install.packages("ROracle")
Installing package into ‘C:/Users/maart_000/Documents/R/win-library/3.3’
(as ‘lib’ is unspecified)
Package which is only available in source form, and may need compilation of C/C++/Fortran: ‘ROracle’
Do you want to attempt to install these from sources?
y/n: y
installing the source package ‘ROracle’

trying URL 'https://cran.rstudio.com/src/contrib/ROracle_1.3-1.tar.gz'
Content type 'application/x-gzip' length 308252 bytes (301 KB)
downloaded 301 KB

* installing *source* package 'ROracle' ...
** package 'ROracle' successfully unpacked and MD5 sums checked
Oracle Client Shared Library 64-bit - 12.2.0.1.0 Operating in Instant Client mode.
found Instant Client C:\Users\maart_000\Desktop\instantclient_12_2
found Instant Client SDK C:\Users\maart_000\Desktop\instantclient_12_2/sdk/include
copying from C:\Users\maart_000\Desktop\instantclient_12_2/sdk/include
** libs
Warning: this package has a non-empty 'configure.win' file,
so building only the main architecture

c:/Rtools/mingw_64/bin/gcc  -I"C:/PROGRA~1/R/R-33~1.3/include" -DNDEBUG -I./oci    -I"d:/Compiler/gcc-4.9.3/local330/include"     -O2 -Wall  -std=gnu99 -mtune=core2 -c rodbi.c -o rodbi.o
c:/Rtools/mingw_64/bin/gcc  -I"C:/PROGRA~1/R/R-33~1.3/include" -DNDEBUG -I./oci    -I"d:/Compiler/gcc-4.9.3/local330/include"     -O2 -Wall  -std=gnu99 -mtune=core2 -c rooci.c -o rooci.o
c:/Rtools/mingw_64/bin/gcc -shared -s -static-libgcc -o ROracle.dll tmp.def rodbi.o rooci.o C:\Users\maart_000\Desktop\instantclient_12_2/oci.dll -Ld:/Compiler/gcc-4.9.3/local330/lib/x64 -Ld:/Compiler/gcc-4.9.3/local330/lib -LC:/PROGRA~1/R/R-33~1.3/bin/x64 -lR
installing to C:/Users/maart_000/Documents/R/win-library/3.3/ROracle/libs/x64
** R
** inst
** preparing package for lazy loading
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded
* DONE (ROracle)

Testing ROracle

You can read the ROracle documentation here. Oracle has been so kind as to provide developer VMs to play around with the database. You can download them here. I used the 'Database App Development VM'.

After installation of ROracle you can connect to the database and, for example, fetch employees from the EMP table. See the example below (make sure you also have DBI installed).

library("DBI")
library("ROracle")
drv <- dbDriver("Oracle")
host <- "localhost"
port <- "1521"
sid <- "orcl12c"
connect.string <- paste(
"(DESCRIPTION=",
"(ADDRESS=(PROTOCOL=tcp)(HOST=", host, ")(PORT=", port, "))",
"(CONNECT_DATA=(SID=", sid, ")))", sep = "")

con <- dbConnect(drv, username = "system", password = "oracle", dbname = connect.string, prefetch = FALSE,
bulk_read = 1000L, stmt_cache = 0L, external_credentials = FALSE,
sysdba = FALSE)

dbReadTable(con, "EMP")

This will yield the data in the EMP table.

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
1 7698 BLAKE MANAGER 7839 1981-05-01 00:00:00 2850 NA 30
2 7566 JONES MANAGER 7839 1981-04-02 00:00:00 2975 NA 20
3 7788 SCOTT ANALYST 7566 1987-04-19 00:00:00 3000 NA 20
4 7902 FORD ANALYST 7566 1981-12-02 23:00:00 3000 NA 20
5 7369 SMITH CLERK 7902 1980-12-16 23:00:00 800 NA 20
6 7499 ALLEN SALESMAN 7698 1981-02-19 23:00:00 1600 300 30
7 7521 WARD SALESMAN 7698 1981-02-21 23:00:00 1250 500 30
8 7654 MARTIN SALESMAN 7698 1981-09-27 23:00:00 1250 1400 30
9 7844 TURNER SALESMAN 7698 1981-09-08 00:00:00 1500 0 30
10 7876 ADAMS CLERK 7788 1987-05-23 00:00:00 1100 NA 20
11 7900 JAMES CLERK 7698 1981-12-02 23:00:00 950 NA 30

Using dplyr

dplyr uses dbplyr and it makes working with database data a lot easier. You can see an example here.

Installing dplyr and dbplyr in R is easy:

install.packages("dplyr")
install.packages("dbplyr")

Various functions are provided to work with data.frames, a popular R datatype, in combination with data from the database. Also, dplyr uses an abstraction above SQL which makes coding SQL easier for non-SQL coders. You can compare it in some ways with Hibernate, which makes working with databases from the Java object world easier.

Some functions dplyr provides:

  • filter() to select cases based on their values.
  • arrange() to reorder the cases.
  • select() and rename() to select variables based on their names.
  • mutate() and transmute() to add new variables that are functions of existing variables.
  • summarise() to condense multiple values to a single value.
  • sample_n() and sample_frac() to take random samples.

I'll use the same example data as in the sample above that uses plain ROracle.

library("DBI")
library("ROracle")
library("dplyr")

#below are required to make the translation done by dbplyr to SQL produce working Oracle SQL
sql_translate_env.OraConnection <- dbplyr:::sql_translate_env.Oracle
sql_select.OraConnection <- dbplyr:::sql_select.Oracle
sql_subquery.OraConnection <- dbplyr:::sql_subquery.Oracle

drv <- dbDriver("Oracle")
host <- "localhost"
port <- "1521"
sid <- "orcl12c"
connect.string <- paste(
"(DESCRIPTION=",
"(ADDRESS=(PROTOCOL=tcp)(HOST=", host, ")(PORT=", port, "))",
"(CONNECT_DATA=(SID=", sid, ")))", sep = "")

con <- dbConnect(drv, username = "system", password = "oracle", dbname = connect.string, prefetch = FALSE,
bulk_read = 1000L, stmt_cache = 0L, external_credentials = FALSE,
sysdba = FALSE)

emp_db <- tbl(con, "EMP")
emp_db

The output is something like:

# Source: table<EMP> [?? x 8]
# Database: OraConnection
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
<int> <chr> <chr> <int> <dttm> <dbl> <dbl> <int>
1 7839 KING PRESIDENT NA 1981-11-16 23:00:00 5000 NA 10
2 7698 BLAKE MANAGER 7839 1981-05-01 00:00:00 2850 NA 30
3 7782 CLARK MANAGER 7839 1981-06-09 00:00:00 2450 NA 10
4 7566 JONES MANAGER 7839 1981-04-02 00:00:00 2975 NA 20
5 7788 SCOTT ANALYST 7566 1987-04-19 00:00:00 3000 NA 20
6 7902 FORD ANALYST 7566 1981-12-02 23:00:00 3000 NA 20
7 7369 SMITH CLERK 7902 1980-12-16 23:00:00 800 NA 20
8 7499 ALLEN SALESMAN 7698 1981-02-19 23:00:00 1600 300 30
9 7521 WARD SALESMAN 7698 1981-02-21 23:00:00 1250 500 30
10 7654 MARTIN SALESMAN 7698 1981-09-27 23:00:00 1250 1400 30
# ... with more rows

If I now want to select specific records, I can do something like:

emp_db %>% filter(DEPTNO == "10")

Which will yield

# Source: lazy query [?? x 8]
# Database: OraConnection
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
<int> <chr> <chr> <int> <dttm> <dbl> <dbl> <int>
1 7839 KING PRESIDENT NA 1981-11-16 23:00:00 5000 NA 10
2 7782 CLARK MANAGER 7839 1981-06-09 00:00:00 2450 NA 10
3 7934 MILLER CLERK 7782 1982-01-22 23:00:00 1300 NA 10

A slightly more complex query:

emp_db %>%
group_by(DEPTNO) %>%
summarise(EMPLOYEES = count())

Will result in the number of employees per department:

# Source: lazy query [?? x 2]
# Database: OraConnection
DEPTNO EMPLOYEES
<int> <dbl>
1 30 6
2 20 5
3 10 3

You can see the generated query by:

emp_db %>%
group_by(DEPTNO) %>%
summarise(EMPLOYEES = count()) %>% show_query()

Will result in

<SQL>
SELECT "DEPTNO", COUNT(*) AS "EMPLOYEES"
FROM ("EMP")
GROUP BY "DEPTNO"

If I want to take a random sample from the dataset to perform analyses on, I can do:

sample_n(as_data_frame(emp_db), 10)

Which could result in something like:

# A tibble: 10 x 8
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
<int> <chr> <chr> <int> <dttm> <dbl> <dbl> <int>
1 7844 TURNER SALESMAN 7698 1981-09-08 00:00:00 1500 0 30
2 7499 ALLEN SALESMAN 7698 1981-02-19 23:00:00 1600 300 30
3 7566 JONES MANAGER 7839 1981-04-02 00:00:00 2975 NA 20
4 7654 MARTIN SALESMAN 7698 1981-09-27 23:00:00 1250 1400 30
5 7369 SMITH CLERK 7902 1980-12-16 23:00:00 800 NA 20
6 7902 FORD ANALYST 7566 1981-12-02 23:00:00 3000 NA 20
7 7698 BLAKE MANAGER 7839 1981-05-01 00:00:00 2850 NA 30
8 7876 ADAMS CLERK 7788 1987-05-23 00:00:00 1100 NA 20
9 7934 MILLER CLERK 7782 1982-01-22 23:00:00 1300 NA 10
10 7782 CLARK MANAGER 7839 1981-06-09 00:00:00 2450 NA 10

Executing the same command again will result in a different sample.

Finally

There are multiple ways to get data to and from the Oracle database and perform actions on it. Oracle provides Oracle R Enterprise, a component of the Oracle Advanced Analytics Option of Oracle Database Enterprise Edition. You can create R proxy objects in your R session from database-resident data. This allows you to work on database data in R while the database does most of the computations. Another feature of Oracle R Enterprise is an R script repository in the database, and there is also a feature to allow execution of R scripts from within the database (embedded), even from within SQL statements. As you can imagine this is quite powerful. More on this in a later blog!


Tweet with download link for JavaOne and Oracle OpenWorld slide decks


In a recent article I discussed how to programmatically fetch a JSON document with information about sessions at Oracle OpenWorld and JavaOne 2017. Yesterday, slide decks for these sessions started to become available. I analyzed how the links to these downloads were included in the JSON data returned by the API. Then I created simple Node programs to tweet about each of the sessions for which a download became available, and to download the file to my local file system.

I added provisions to space out the tweets and the download activity over time – so as not to burden the backend of the web site, and not to be kicked off Twitter for being a robot.

The code I crafted is not particularly ingenious – it was created rather hastily in order to share the links for downloading slide decks with the OOW17 and JavaOne communities. I used the npm modules twit and download. The code can be found on GitHub: https://github.com/lucasjellema/scrape-oow17.
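
The essence of both programs is small. A sketch of the core of each – the credentials and the exact shape of the session objects are placeholders:

var Twit = require('twit');
var download = require('download');

var twitterClient = new Twit({
    consumer_key: '<key>', consumer_secret: '<secret>',
    access_token: '<token>', access_token_secret: '<token secret>'
});

// tweet the download link for one session (hypothetical session object)
function tweetSession(session) {
    var status = session.title + ' - slides: ' + session.slidesUrl;
    twitterClient.post('statuses/update', { status: status }, function (err, data) {
        if (err) console.error(err);
    });
}

// save the slide deck for one session to the local file system
function downloadSlides(session) {
    return download(session.slidesUrl, 'slidedecks');   // returns a Promise
}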

The documents javaone2017-sessions-catalog.json and oow2017-sessions-catalog.json contain details on all sessions – including the URLs for downloading slides.



Quick and clean start with Java 9–running Docker container in VirtualBox VM on Windows 10 courtesy of Vagrant


The messages from JavaOne 2017 were loud and clear. Some of these:

  • Java 9 is here,
  • the OpenJDK has all previously exclusive commercial features from the Oracle (fka Sun) JDK – this includes the Java Flight Recorder for real time monitoring/metrics gathering and analysis,
  • Java 9 will be succeeded by Java 18.3, 18.9 and so on (a six month release cadence), bringing much quicker evolution with continued quality and stability,
  • Jigsaw is finally here; it powers the coming evolution of Java and the platform, and it allows us to create fine tuned, tailored Java runtime environments that may take less than 10-20% of the full blown JRE,
  • Java 9 has many cool and valuable features besides the modularity of Jigsaw – features that make programming easier, more elegant, more fun, more lightweight, etc.,
  • one of the objectives is "Java First, Java Always" (instead of: when web companies mature, then they switch to Java); having Java enabled for cloud, microservices and serverless is an important step in this

    Note: during the JavaOne keynote, Spotify presented a great example of this pattern: they have a microservices architecture (from before it was called microservices); most services were originally created in Python, with the exception of the search capability. Due to scalability challenges, all Python based microservices have been migrated to Java over the years; the original search service is still around. Java not only scales very well and has the largest pool of developers to draw from, it also provides great runtime insight into what is going on in the JVM.

I have played around a little with Java 9, but now that it is out in the open (and I have started working on a fresh new laptop – Windows 10) I thought I should give it another try. In this article I will describe the steps I took from a non Java enabled Windows environment to playing with Java 9 in jshell – in an isolated container, created and started without any programming, installation or configuration. I used Vagrant and VirtualBox – both were installed on my laptop prior to the exercise described in this article. Vagrant in turn used Docker and downloaded the OpenJDK Docker image for Java 9 on top of Alpine Linux. All of that was hidden from view.

The steps:

0. Preparation – install VirtualBox and Vagrant

1. Create Vagrant file – configured to provide a VirtualBox image (based on Ubuntu Linux) and provision the Docker host on that image as well as a Docker Container with OpenJDK 9

2. Run Vagrant for that Vagrant file to have it spin up the VirtualBox, install Docker into it, pull the OpenJDK image and run the container

3. Connect into VirtualBox Docker Host and Docker Container

4. Run jshell command line and try out some Java 9 statements

In more detail:

1. Create Vagrant file

In a new directory, create a file called Vagrantfile – no extension. The file has the following content:

It is configured to provide a VirtualBox image (based on Ubuntu Linux) and provision the Docker host on that VB image as well as a Docker Container based on the OpenJDK:9 image.


Vagrant.configure("2") do |config|

config.vm.provision "docker" do |d|
    d.run "j9",
      image: "openjdk:9",
      cmd: "/bin/sh",
      args: "-v '/vagrant:/var/www'"
    d.remains_running = true
  end

# The following line terminates all ssh connections. Therefore Vagrant will be forced to reconnect.
# That's a workaround to have the docker command in the PATH
# Command: "docker" "ps" "-a" "-q" "--no-trunc"
# without it, I run into this error:
# Stderr: Get http:///var/run/docker.sock/v1.19/containers/json?all=1: dial unix /var/run/docker.sock: permission denied.
# Are you trying to connect to a TLS-enabled daemon without TLS?

config.vm.provision "shell", inline:
"ps aux | grep 'sshd:' | awk '{print $2}' | xargs kill"

config.vm.define "dockerhostvm"
config.vm.box = "ubuntu/trusty64"
config.vm.network "private_network", ip: "192.168.188.102"

config.vm.provider :virtualbox do |vb|
  vb.name = "dockerhostvm"
  vb.memory = 4096
  vb.cpus = 2
  vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
  vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
end

end

# to get into running container:
# vagrant ssh
# docker run -it  -v /vagrant:/var/www openjdk:9 /bin/sh

2. Run Vagrant for that Vagrant file

And have it spin up the VirtualBox VM, install Docker into it, pull the OpenJDK image and run the container.

3. Connect into VirtualBox Docker Host and Docker Container

Using

vagrant ssh

to connect into the VirtualBox Ubuntu Host and

docker run -it openjdk:9 /bin/sh

to run a container and connect to its shell command line, we get an environment primed for running Java 9.

At this point, I should also be able to use docker exec to get into the container that was started by the Vagrant Docker provisioning configuration. However, I had some unresolved issues with that – the container kept restarting. I will attempt to resolve that issue.

4. Run jshell command line and try out some Java 9 statements

JShell is the new Java command line tool that allows REPL style exploration – somewhat similar to, for example, Python and JavaScript (and even SQL*Plus).

Here is an example of some JShell interaction:

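The following transcript is illustrative of the kind of statements you can evaluate (the exact expressions and output may differ slightly from the original session):

jshell> int answer = 42
answer ==> 42

jshell> String greeting = "Hello Java 9"
greeting ==> "Hello Java 9"

jshell> System.out.println(greeting + " - the answer is " + answer)
Hello Java 9 - the answer is 42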

I tried to use the new simple syntax for creating collections from static data. Here I got the syntax right:

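Along these lines – an illustrative snippet, not the exact statements from the original session (note that the iteration order of Set and Map literals is unspecified):

jshell> List<String> names = List.of("Tobias", "Lucas", "Hugo")
names ==> [Tobias, Lucas, Hugo]

jshell> Map<String, Integer> ages = Map.of("Tobias", 12, "Lucas", 14)
ages ==> {Tobias=12, Lucas=14}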

It took me a little time to find out the exit strategy. Turns out that /exit does that trick:


In summary: spinning up a clean, isolated environment in which to try out Java is not hard at all. On Linux – with Docker running natively – it is even simpler, although even then using Vagrant may be beneficial. On Windows it is also quite straightforward – no complex sysadmin stuff required and hardly any command line work either. And that is something we developers should start to master – if we do not do so already.

Issue with Docker Provider in Vagrant

Note: I did not succeed in using the Docker provider (instead of the provisioner) with Vagrant. Attempting that (cleaner) approach failed with: “Bringing machine ‘j9’ up with ‘docker’ provider… The executable ‘docker’ Vagrant is trying to run was not found in the %PATH% variable. This is an error. Please verify this software is installed and on the path.” I have looked across the internet and found similar reports, but did not find a solution that worked for me.


The provider is documented here: https://www.vagrantup.com/docs/docker/

The Vagrantfile I tried to use originally – but was unable to get to work:


(based on my own previous article: https://technology.amis.nl/2015/08/22/first-steps-with-provisioning-of-docker-containers-using-vagrant-as-provider/)

The post Quick and clean start with Java 9–running Docker container in VirtualBox VM on Windows 10 courtesy of Vagrant appeared first on AMIS Oracle and Java Blog.


Java 9 – First baby steps with Modules and jlink


In a recent article, I created an isolated Docker Container as Java 9 R&D environment: https://technology.amis.nl/2017/10/11/quick-and-clean-start-with-java-9-running-docker-container-in-virtualbox-vm-on-windows-10-courtesy-of-vagrant/. In this article, I will use that environment to take a few small steps with Java 9 – in particular with modules. Note: this story does not end well. I wanted to conclude with using jlink to create a standalone runtime that contained both the required JDK modules and my own module – and demonstrate how small that runtime was. Unfortunately, the link step failed for me. More news on that in a later article.

Create Custom Module

Start a container based on the openjdk:9 image, exposing its port 80 on the docker host machine and mapping folder /vagrant (mapped from my Windows host to the Docker Host VirtualBox Ubuntu image) to /var/www inside the container:

docker run -it -p 127.0.0.1:8080:80 -v /vagrant:/var/www openjdk:9 /bin/sh

Create a Java application with a custom module: I create a single module (nl.amis.j9demo) with a single class nl.amis.j9demo.MyDemo. The module depends directly on one JDK module (jdk.httpserver) and indirectly on several more.

The root directory for the module has the same fully qualified name as the module: nl.amis.j9demo.

This directory contains the module-info.java file. This file specifies:

  • which modules this module depends on
  • which packages it exports (for other modules to create dependencies on)

In my example, the file is very simple – only specifying a dependency on jdk.httpserver:

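Reconstructed from that description, the complete module-info.java is:

module nl.amis.j9demo {
  requires jdk.httpserver;
}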

The Java class MyDemo has a number of imports. Many are for base classes from the java.base module. Note: every Java module has an implicit dependency on java.base, so we do not need to include it in the module-info.java file.


This code creates an instance of HttpServer – an object that listens for HTTP requests at the specified port (80 in this case) and then always returns the same response (the string “This is the response”). As meaningless as that is – the notion of receiving and replying to HTTP requests in just a few lines of Java code (running in the OpenJDK!) is quite powerful.

package nl.amis.j9demo;
import java.io.*;
import java.net.*;
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;
import com.sun.net.httpserver.*;

import static java.lang.System.out;
import static java.net.HttpURLConnection.*;

public class MyDemo{
  private static final int DEFAULT_PORT = 80;
  private static URI ROOT_PATH = URI.create("/");


private static class MyHandler implements HttpHandler {
       public void handle(HttpExchange t) throws IOException {
           URI tu = t.getRequestURI();
           InputStream is = t.getRequestBody();
           // .. read the request body
           String response = "This is the response";
           t.sendResponseHeaders(200, response.length());
           OutputStream os = t.getResponseBody();
           os.write(response.getBytes());
           os.close();
       }
   }


  public static void main(String[] args) throws IOException {
    HttpServer server = HttpServer.create(new InetSocketAddress(DEFAULT_PORT), 0);
    server.createContext("/apps", new MyHandler());
    server.setExecutor(null); // creates a default executor
    server.start();
    out.println("HttpServer is started, listening at port "+DEFAULT_PORT);
  }

}

Compile, Build and Run

Compile the custom module:

javac -d mods --module-source-path src -m nl.amis.j9demo


Create destination directory for JAR file

mkdir -p lib

Create the JAR for the module:

jar --create --file lib/nl-amis-j9demo.jar --main-class nl.amis.j9demo.MyDemo -C mods/nl.amis.j9demo .


Inspect the JAR file:

jar tvf lib/nl-amis-j9demo.jar


To run the Java application- with a reference to the module:

java -p lib/ -m nl.amis.j9demo


The traditional equivalent with a classpath for the JAR file(s) would be:

java -classpath lib/nl-amis-j9demo.jar nl.amis.j9demo.MyDemo

Because port 80 in the container was exposed and mapped to port 8080 on the Docker Host, we can access the Java application from the Docker Host, using wget:

wget 127.0.0.1:8080/apps


The response from the Java application is hardly meaningful. However, the fact that we get a response at all is quite something: the ‘remote’ container based on openjdk:9 has published an HTTP server from our custom module that we can access from the Docker Host with a simple HTTP request.

Jlink

I tried to use jlink to create a special runtime for my demo app, consisting of the required parts of the JDK and my own module. I expected this runtime to be really small.

The JDK modules on my Docker container are, by the way, in /docker-java-home/jmods.


The command for this:

jlink --output mydemo-runtime --module-path lib:/docker-java-home/jmods --limit-modules nl.amis.j9demo --add-modules nl.amis.j9demo --launcher demorun=nl.amis.j9demo --compress=2 --no-header-files --strip-debug

Unfortunately, on my OpenJDK:9 Docker Image, linking failed with this error:


Error: java.io.UncheckedIOException: java.nio.file.FileSystemException: mydemo-runtime/legal/jdk.httpserver/ASSEMBLY_EXCEPTION: Protocol error

Resources

Documentation for jlink – https://docs.oracle.com/javase/9/tools/jlink.htm

JavaDoc for HttpServer package – https://docs.oracle.com/javase/9/docs/api/com/sun/net/httpserver/package-summary.html#

Java9 Modularity Part 1 (article on Medium by Chandrakala) – https://medium.com/@chandra25ms/java9-modularity-part1-a102d85e9676

JavaOne 2017 Keynote – Mark Reinhold demoing jlink – https://youtu.be/UNg9lmk60sg?t=1h35m43s

Exploring Java 9 Modularity – https://www.polidea.com/blog/Exploring-Java-9-Java-Platform-Module-System/

The post Java 9 – First baby steps with Modules and jlink appeared first on AMIS Oracle and Java Blog.

JSON manipulation in Java 9 JShell


In this article I will demonstrate how we can work with JSON based data – for analysis, exploration, cleansing and processing – in JShell, much like we do in Python. I work with a JSON document containing entries for all sessions at the Oracle OpenWorld 2017 conference (https://raw.githubusercontent.com/lucasjellema/scrape-oow17/master/oow2017-sessions-catalog.json)

The Java SE 9 specification for the JDK does not include the JSON-P API and libraries for processing JSON. In order to work with JSON-P in JShell, we need to add the libraries – which we first need to find and download.

I have used a somewhat roundabout way to get hold of the required jar-files (but it works in a pretty straightforward manner):

1. Create a pom.xml file with dependencies on JSON-P

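The relevant dependencies section of that pom.xml – a minimal sketch; the javax.json coordinates match the jar files used below, but verify the versions for your own situation:

<dependencies>
  <dependency>
    <groupId>javax.json</groupId>
    <artifactId>javax.json-api</artifactId>
    <version>1.1</version>
  </dependency>
  <dependency>
    <groupId>org.glassfish</groupId>
    <artifactId>javax.json</artifactId>
    <version>1.1</version>
  </dependency>
</dependencies>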

2. Then run

mvn install dependency:copy-dependencies

as described in this article: https://technology.amis.nl/2017/02/09/download-all-directly-and-indirectly-required-jar-files-using-maven-install-dependencycopy-dependencies/

This will download the relevant JAR files to the subdirectory target/dependencies.


3. Copy the JAR files to a directory that can be accessed from within the Docker container that runs JShell – for me that is the local lib directory, which is mapped by Vagrant and Docker to /var/www/lib inside the Docker container that runs JShell.

 

4. In the container that runs JShell:

Start JShell with this statement that makes the new httpclient module available, for when the JSON document is retrieved from an HTTP URL resource:

jshell --add-modules jdk.incubator.httpclient

 

5. Update classpath from within jshell

To process JSON in JShell – using JSON-P – we need to set the classpath to include the two jar files that were downloaded using Maven:

/env --class-path /var/www/lib/javax.json-1.1.jar:/var/www/lib/javax.json-api-1.1.jar

Then the classes in JSON-P are imported:

import javax.json.*;

If we need to retrieve JSON data from a URL resource, we should also:

import jdk.incubator.http.*;

 

6. I have made the JSON document available on the file system.


It can be accessed as follows:

InputStream input = new FileInputStream("/var/www/oow2017-sessions-catalog.json");

 

7. Parse data from file into JSON Document, get the root object and retrieve the array of sessions:

JsonReader jsonReader = Json.createReader(input);

JsonObject rootJSON = jsonReader.readObject();

JsonArray sessions = rootJSON.getJsonArray("sessions");

 

8. Filter sessions with the term SQL in the title and print their title to the System output – using Streams:

sessions.stream().map( p -> (JsonObject)p).filter(s -> s.getString("title").contains("SQL")).forEach( s -> {System.out.println(s.getString("title"));})


 

One other example: show a list of all presentations for which a slide deck has been made available for download, along with the download URL:

sessions.stream()
  .map( p -> (JsonObject)p)
  .filter(s -> s.containsKey("files") && !s.isNull("files") && !(s.getJsonArray("files").isEmpty()))
  .forEach( s -> {System.out.println(s.getString("title")+" url:"+s.getJsonArray("files").getJsonObject(0).getString("url"));})

 

Bonus: Do HTTP Request

As an aside, here are some steps in jshell to execute an HTTP request:

jshell> HttpClient client = HttpClient.newHttpClient();
client ==> jdk.incubator.http.HttpClientImpl@4d339552

jshell> HttpRequest request = HttpRequest.newBuilder(URI.create("http://www.google.com")).GET().build();
request ==> http://www.google.com GET

jshell> HttpResponse response = client.send(request, HttpResponse.BodyHandler.asString())
response ==> jdk.incubator.http.HttpResponseImpl@147ed70f

jshell> System.out.println(response.body())
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.nl/?gfe_rd=cr&amp;dcr=0&amp;ei=S2XeWcbPFpah4gTH6Lb4Ag">here</A>.
</BODY></HTML>

 


The post JSON manipulation in Java 9 JShell appeared first on AMIS Oracle and Java Blog.

Handle HTTP PATCH request with Java Servlet


The Java Servlet specification does not include handling a PATCH request. That means that the class javax.servlet.http.HttpServlet does not have a doPatch() method, unlike doGet, doPost, doPut etc.

That does not mean that a Java Servlet cannot handle PATCH requests. It is quite simple to make it do so.

The trick is overriding the service(request, response) method – and having it respond to PATCH requests (in a special way) and to all other requests in the normal way. Or, to do it one step more elegantly:

  1. create an abstract class – MyServlet for example – that extends HttpServlet, overrides service() and adds an abstract doPatch() method – which is never supposed to be invoked directly, only overridden
    package nl.amis.patch.view;
    
    import java.io.IOException;
    
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    
    public abstract class MyServlet extends HttpServlet {
    
        public void service(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
            if (request.getMethod().equalsIgnoreCase("PATCH")){
               doPatch(request, response);
            } else {
                super.service(request, response);
            }
        }
    
        public abstract void doPatch(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException;
    
    }
    
    
  2. any servlet (class) that should handle PATCH requests should extend from this class [MyServlet] and implement the doPatch() method

 

package nl.amis.patch.view;

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.*;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

@WebServlet(name = "TheServlet", urlPatterns = { "/theservlet" })
public class TheServlet extends MyServlet {
    private static final String CONTENT_TYPE = "text/html; charset=windows-1252";

    public void init(ServletConfig config) throws ServletException {
        super.init(config);
    }

    public void doPatch(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        response.setContentType(CONTENT_TYPE);
        PrintWriter out = response.getWriter();
        out.println("<html>");
        out.println("<head><title>TheServlet</title></head>");
        out.println("<body>");
        out.println("<p>The Servlet has received a PATCH request and will do something meaningful with it! This is the reply.</p>");
        out.println("</body></html>");
        out.close();
    }
}

 

Sending a PATCH request from Postman to the Servlet returns the reply produced by doPatch().

 

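If you prefer the command line over Postman, a similar test can be done with curl – assuming the application is deployed with context root mywebapp on localhost port 8080 (host, port and context root are illustrative; adjust them for your environment):

curl -X PATCH http://localhost:8080/mywebapp/theservlet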

The post Handle HTTP PATCH request with Java Servlet appeared first on AMIS Oracle and Java Blog.

Will you help me build the zoo of programming languages?


Have you ever come across the following challenge? You have to name something: your own project, your own product, your company, your boat or even your own child. Coming up with the right name is very important, since this is something you have worked on for a long time. The name has to reflect your inspiration and effort. You used your own blood, sweat and tears creating this. You spent many long, lonely nights to finalize it (just forget the child metaphor here). And now you are ready to launch it. But wait… it has no name. The best way to name something is to find an example in nature, and animals are powerful and good inspirations for names. Here are 14 programming languages, software products and tools that are named after an animal, grouped together in my zoo of programming languages. And there are probably many more. Feel free to help me and add yours as a comment on this article.

Impala

Impala is one of the major tools for querying a big data database: a query engine that runs on Hadoop. Impala offers scalable parallel database technology to Hadoop, enabling users to issue low-latency SQL queries to data stored in HDFS and Apache HBase without requiring data movement or transformation. Impala is integrated with Hadoop to use the same file and data formats, metadata, security and resource management frameworks used by MapReduce, Apache Hive, Apache Pig and other Hadoop software.
Impala is promoted for analysts and data scientists to perform analytics on data stored in Hadoop via SQL or business intelligence tools. The result is that large-scale data processing (via MapReduce) and interactive queries can be done on the same system using the same data and metadata – removing the need to migrate data sets into specialized systems and/or proprietary formats simply to perform the analysis. https://impala.apache.org/

The other impala is a medium-sized antelope found in eastern and southern Africa, the sole member of the genus Aepyceros.

Toad

Toad is a database management toolset from Quest Software that database developers, database administrators and data analysts use to manage both relational and non-relational databases using SQL. There are Toad products for developers and DBAs, which run on Oracle, SQL Server, IBM DB2 (LUW & z/OS), SAP and MySQL, as well as a Toad product for data preparation, which supports most data platforms. Toad solutions enable data professionals to automate processes, minimize risks and cut project delivery timelines. https://www.quest.com/toad/

The other toad is a common name for certain frogs, especially of the family Bufonidae, that are characterized by dry, leathery skin, short legs, and large bumps covering the parotoid glands. Wikipedia

Elk

The ELK stack (now called Elastic Stack) consists of Elasticsearch, Logstash, and Kibana. Although they’ve all been built to work exceptionally well together, each one is a separate project that is driven by the open-source vendor Elastic – which itself began as an enterprise search platform vendor. It has now become a full-service analytics software company, mainly because of the success of the ELK stack. Wide adoption of Elasticsearch for analytics has been the main driver of its popularity.
Elasticsearch is a juggernaut solution for your data extraction problems. A single developer can use it to find the high-value needles underneath all of your data haystacks, so you can put your team of data scientists to work on another project. https://en.wikipedia.org/wiki/Elasticsearch

The other elk, or wapiti (Cervus canadensis), is one of the largest species within the deer family, Cervidae, and one of the largest land mammals in North America and Eastern Asia. This animal should not be confused with the still larger moose (Alces alces), to which the name “elk” applies in British English and in reference to populations in Eurasia.

Ant

Apache Ant is a software tool for automating software build processes, which originated from the Apache Tomcat project in early 2000. It was created as a replacement for the Unix Make build tool, due to a number of problems with make. Ant is similar to Make but is implemented in Java, requires the Java platform, and is best suited to building Java projects. The most immediately noticeable difference between Ant and Make is that Ant uses XML to describe the build process and its dependencies, whereas Make uses the Makefile format. By default, the XML file is named build.xml. Ant is an open-source project, released under the Apache License by the Apache Software Foundation. https://ant.apache.org/index.html

The other ant is a eusocial insect of the family Formicidae and, along with the related wasps and bees, belongs to the order Hymenoptera. Ants evolved from wasp-like ancestors in the Cretaceous period, about 99 million years ago, and diversified after the rise of flowering plants. More than 12,500 of an estimated total of 22,000 species have been classified. They are easily identified by their elbowed antennae and the distinctive node-like structure that forms their slender waists.

Rhino

The Rhino project was started at Netscape in 1997. At the time, Netscape was planning to produce a version of Netscape Navigator written fully in Java, and so it needed an implementation of JavaScript written in Java. When Netscape stopped work on Javagator, as it was called, the Rhino project was finished as a JavaScript engine. Since then, a couple of major companies (including Sun Microsystems) have licensed Rhino for use in their products and paid Netscape to do so, allowing work on it to continue. https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Rhino

The other rhino (rhinoceros, from Greek rhinokeros, meaning ‘nose-horned’, from rhinos, meaning ‘nose’, and keratos, meaning ‘horn’), commonly abbreviated to rhino, is one of any five extant species of odd-toed ungulates in the family Rhinocerotidae, as well as any of the numerous extinct species. Two of the extant species are native to Africa and three to Southern Asia.

Python

Python is an interpreted high-level programming language for general-purpose programming. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, and a syntax that allows programmers to express concepts in fewer lines of code, notably using significant whitespace. It provides constructs that enable clear programming on both small and large scales. Python features a dynamic type system and automatic memory management. It supports multiple programming paradigms, including object-oriented, imperative, functional and procedural, and has a large and comprehensive standard library. https://en.wikipedia.org/wiki/Python_(programming_language)

The other Python is a genus of nonvenomous snakes in the family Pythonidae, found in Africa and Asia. Until recently, seven extant species were recognised; however, three subspecies have been promoted and a new species recognized. A member of this genus, Python reticulatus, is among the longest snakes and extant reptiles in the world.

Goat

WebGoat or GOAT is a deliberately insecure web application maintained by OWASP, designed to teach web application security lessons. This program is a demonstration of common server-side application flaws. The exercises are intended to be used by people to learn about application security and penetration testing techniques. https://www.owasp.org/index.php/Category:OWASP_WebGoat_Project

The other goat is a member of the family Bovidae and is closely related to the sheep as both are in the goat-antelope subfamily Caprinae. There are over 300 distinct breeds of goat. Goats are one of the oldest domesticated species and have been used for their milk, meat, hair, and skins over much of the world.

Lama

LAMA is a framework for developing hardware-independent, high-performance code for heterogeneous computing systems. It facilitates the development of fast and scalable software that can be deployed on nearly every type of system with a single code base. The framework supports multiple target platforms within a distributed heterogeneous environment. It offers optimized device code on the backend side, high scalability through latency hiding and asynchronous execution across multiple nodes. https://www.libama.org/

The other llama (Lama glama) is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the Pre-Columbian era.
They are very social animals and live with other llamas as a herd. The wool produced by a llama is very soft and lanolin-free. Llamas are intelligent and can learn simple tasks after a few repetitions. When used as a pack animal, they can carry about 25 to 30% of their body weight for 8 to 13 km (5–8 miles).

Serpent

Serpent is one of the high-level programming languages used to write Ethereum contracts. The language, as suggested by its name, is designed to be very similar to Python; it is intended to be maximally clean and simple, combining many of the efficiency benefits of a low-level language with ease-of-use in programming style, and at the same time adding special domain-specific features for contract programming. The latest version of the Serpent compiler, available on GitHub, is written in C++, allowing it to be easily included in any client.

The other serpent, or snake, is one of the oldest and most widespread mythological symbols. The word is derived from the Latin serpens, a crawling animal or snake. Snakes have been associated with some of the oldest rituals known to humankind and represent the dual expression of good and evil.

Penguin

PENGUIN is a grammar-based language for programming graphical user interfaces. Code for each thread of control in a multi-threaded application is confined to its own module, promoting modularity and reuse of code. Networks of PENGUIN components (each composed of an arbitrary number of modules) can be used to construct large reactive systems with parallel execution, internal protection boundaries, and plug-compatible communication interfaces. Its authors argue that the PENGUIN building-block approach constitutes a more appropriate framework for user interface programming than the traditional Seeheim Model. https://en.wikipedia.org/wiki/Penguin_Software

The other penguins (order Sphenisciformes, family Spheniscidae) are a group of aquatic, flightless birds. They live almost exclusively in the Southern Hemisphere, with only one species, the Galápagos penguin, found north of the equator. Highly adapted for life in the water, penguins have countershaded dark and white plumage, and their wings have evolved into flippers. Most penguins feed on krill, fish, squid and other forms of sea life caught while swimming underwater. They spend about half of their lives on land and half in the oceans. Although almost all penguin species are native to the Southern Hemisphere, they are not found only in cold climates such as Antarctica; in fact, only a few species live that far south, while several species are found in the temperate zone.

Cheetah

Cheetah is a Python-powered template engine and code generator. It can be used standalone or combined with other tools and frameworks. Web development is its principal use, but Cheetah is very flexible and is also being used to generate C++ game code, Java, SQL, form emails and even Python code. Cheetah is an open source template engine and code-generation tool written in Python; it can be used on its own or incorporated with other technologies and stacks, regardless of whether they’re written in Python or not. https://pythonhosted.org/Cheetah/

At its core, Cheetah is a domain-specific language for markup generation and templating which allows for full integration with existing Python code but also offers extensions to traditional Python syntax to allow for easier text-generation.

Porcupine

Porcupine is an open-source Python-based web application server that provides front-end and back-end revolutionary technologies for building modern data-centric Web 2.0 applications. Many of the tasks required for building web applications as you know them are either eliminated or simplified. For instance, when developing a Porcupine application you don’t have to design a relational database. You only have to design and implement your business objects as Python classes, using the building blocks provided by the framework (data types). Porcupine integrates a native object key/value database, therefore the overhead required by an object-relational mapping technique when retrieving or updating a single object is removed. http://www.innoscript.org/

The other Porcupines are rodentian mammals with a coat of sharp spines, or quills, that protect against predators. The term covers two families of animals, the Old World porcupines of family Hystricidae, and the New World porcupines of family Erethizontidae. Both families belong to the infraorder Hystricognathi within the profoundly diverse order Rodentia and display superficially similar coats of quills: despite this, the two groups are distinct from each other and are not closely related to each other within the Hystricognathi.

Orca

Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data types. The implementation of Orca takes care of the physical distribution of objects among the local memories of the processors. In particular, an implementation may replicate and/or migrate objects in order to decrease access times to objects and increase parallelism. A programming language for distributed systems: http://courses.cs.vt.edu/~cs5314/Lang-Paper-Presentation/Papers/HoldPapers/ORCA.pdf

The other orca (Orcinus orca) is a toothed whale belonging to the oceanic dolphin family, of which it is the largest member. Killer whales have a diverse diet, although individual populations often specialize in particular types of prey. Some feed exclusively on fish, while others hunt marine mammals such as seals and dolphins. They have been known to attack baleen whale calves, and even adult whales. Killer whales are apex predators, as there is no animal that preys on them. Killer whales are considered a cosmopolitan species and can be found in each of the world’s oceans in a variety of marine environments, from Arctic and Antarctic regions to tropical seas – they are only absent from the Baltic and Black seas, and some areas of the Arctic Ocean.

Seagull

Seagull is an open source (GPL) multi-protocol traffic generator test tool. Primarily aimed at IMS (3GPP, TISPAN, CableLabs) protocols (and thus the perfect complement to SIPp for IMS testing), Seagull is a powerful traffic generator for functional, load, endurance, stress and performance/benchmark tests for almost any kind of protocol. Seagull was created by HP and released in 2006. http://gull.sourceforge.net/

The other seagull is a seabird of the family Laridae in the suborder Lari. Gulls are most closely related to the terns (family Sternidae) and only distantly related to auks, skimmers, and, more distantly, to the waders. Until the 21st century, most gulls were placed in the genus Larus, but this arrangement is now known to be polyphyletic, leading to the resurrection of several genera.

Sloth

Sloth is the world’s slowest computer language, proudly announced by Larry Page at the 2014 Google WWDC as a reaction to Microsoft C-flat-minor. Both languages are still competing in the race for the slowest computer language. Sloth, which stands for Seriously Low Optimization ThresHolds, has been under development for a really, really long time. I mean, like, forever, man. https://www.eetimes.com/author.asp?doc_id=1322644

Larry Page at the recent WWDC introducing SLOTH.

The other sloths are arboreal mammals noted for their slowness of movement and for spending most of their lives hanging upside down in the trees of the tropical rainforests of South America and Central America. The six species are in two families: two-toed sloths and three-toed sloths. In spite of this traditional naming, all sloths actually have three toes; the two-toed sloths merely have two digits, or fingers, on each forelimb. The sloth is so named because of its very low metabolism and deliberate movements, sloth being related to the word slow.


Add your languages

Hope you enjoyed this small tour. There are probably many more languages named after animals. Please add them as comments and I will update the article. Hopefully, we can cover the entire animal kingdom. Thank you in advance for your submissions.

Sources from Wikipedia.

 

The post Will you help me build the zoo of programming languages? appeared first on AMIS Oracle and Java Blog.

Getting started with Spring Boot microservices. Why and how.


In order to quickly develop microservices, Spring Boot is a common choice. Why should I be interested in Spring Boot? In this blog post I’ll give you some reasons why looking at Spring Boot is interesting and provide some samples on how to get started quickly. I’ll briefly talk about microservices, move on to Spring Boot and end with Application Container Cloud Service, which is an ideal platform to run and manage your Spring Boot applications. This blog touches many subjects, but they fit together nicely. You can view the code of my sample Spring Boot project here. Most of the Spring Boot knowledge has been gained from the following free course by Java Brains.

Microservices

Before we go deeper into why Spring Boot for microservices, we of course first need to know what microservices are. An easy question to ask, but a little complex to answer in a few lines of a blog. One of the first people to describe characteristics of microservices, and to actually call them that, was Martin Fowler in 2014. What better source to go back to than the articles he has written. For example here.

‘In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.’

— James Lewis and Martin Fowler

Of course there are a lot of terms involved in this definition:

  • It is an architectural style for developing a single application.
  • A suite of small services each running in its own process.
  • Communicating with lightweight mechanisms, often HTTP.
  • Built around business capabilities. Look up bounded context.
  • A bare minimum of centralized management of these services. This implies no application server which provides centralized management of the applications running on it.
  • May be written in different programming languages or use different storage technologies.

A microservice architectural style also has several characteristics. It is very interesting to look at such an architecture in more detail like for example the OMESA initiative to help you get started. As is of course obvious and true with all architectural styles, you will gain most benefits when doing it right. It is however often not trivial to determine ‘right’.

Spring Boot microservices

Spring Boot features and microservice principles

Spring Boot is based on certain principles which align with microservice architecture. The primary goals of Spring Boot:

  • Provide a radically faster and widely accessible getting started experience for all Spring development.
  • Be opinionated out of the box, but get out of the way quickly as requirements start to diverge from the defaults.
  • Provide a range of non-functional features that are common to large classes of projects (e.g. embedded servers, security, metrics, health checks, externalized configuration).
  • Absolutely no code generation and no requirement for XML configuration.

The features provided by Spring Boot also make it a good fit to implement microservices in.

  • Spring Boot applications can contain an embedded Tomcat server. This is a completely standalone Tomcat container which has its configuration as being part of the application.
  • Spring Boot is very well suited to create lightweight JSON/REST services.
  • Features like health checks are provided: Spring Boot offers Actuator, a set of REST services which allow monitoring and management. Look here. Also, externalized configuration can be used. Few centralized management features are required.
  • Since different storage techniques can be used, Spring provides Spring Data JPA. JPA is the Java Persistence API. This API provides ORM capabilities to make working with relational databases easier (mostly vendor independent; it supports EclipseLink, Hibernate and several others).

Example of an Actuator call to request health status
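For example, a GET request to the health endpoint returns a small JSON document – a minimal illustration; the exact fields depend on your configuration and Spring Boot version:

GET http://localhost:8080/health

{"status":"UP"}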

Easy to implement API design patterns

There are plenty of descriptions online to provide API design guidelines. See for example here. An example API URL can be something like: http://api.yourservice.com/v1/companies/34/employees. Notice the structure of the URL which amongst other things contains a version number. Oracle Mobile Cloud Service documentation also has several design recommendations. See here. These design considerations are of course easily implemented in Spring Boot.

See for example the below code sample:
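Reconstructed here as a minimal sketch – the URL structure follows the guidelines mentioned above, but class and entity names are illustrative, not taken from the original project:

import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/v1/companies/{companyId}/employees")
public class EmployeeController {

    // handles GET http://api.yourservice.com/v1/companies/34/employees
    @GetMapping
    public String listEmployees(@PathVariable String companyId) {
        return "employees of company " + companyId;
    }

    // handles POST; @RequestBody gives access to the body of the request message
    @PostMapping
    public String addEmployee(@PathVariable String companyId, @RequestBody String employee) {
        return "created employee for company " + companyId + ": " + employee;
    }
}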

A simple Spring Boot controller

You can see how the HTTP operations are used and the way method calls are mapped to URLs. Added benefit of this sample is that it also shows how to access the body of the request message.

Integration with backend systems

Spring Boot integrates with JPA. JPA provides an API to easily do ORM. It allows you to work with objects in Java which are backed by database data. For basic CRUD operations, the effort required to implement JPA in Spring Boot is minimal.

You only need three things to do simple CRUD operations when using the embedded Derby database.

  • An annotated entity. You only require two annotations inside your POJO. @Entity to annotate the class and @Id to indicate the variable holding primary key.
  • A repository interface extending CrudRepository (from org.springframework.data.repository)
  • Inside your service, you can use the @Autowired annotation to create a local variable with an instance of the repository.

Connection details for the embedded Derby server are not required. They are required for external databases though. If you want to connect to an Oracle database, read the following here.
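Put together, a minimal sketch of those three pieces could look like this (names are illustrative; findOne is the Spring Data 1.x method, later versions use findById):

// Employee.java - the annotated entity
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Employee {
    @Id
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}

// EmployeeRepository.java - the repository interface
import org.springframework.data.repository.CrudRepository;

public interface EmployeeRepository extends CrudRepository<Employee, Long> {
}

// EmployeeService.java - a service using the repository via @Autowired
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class EmployeeService {
    @Autowired
    private EmployeeRepository employeeRepository;

    public Employee findEmployee(Long id) {
        return employeeRepository.findOne(id); // Spring Data generates the implementation
    }
}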

Pretty comparable to microservices on Node

Node or Spring Boot? This is of course a topic on which there are many opinions. Many blogs have been written comparing the two. See for example here.

In several aspects, Spring Boot beats Node.js.

  • Performance. Read the following article here. Spring Boot microservices can achieve higher throughput than similar services on Node.js.
  • Maturity. Spring has a long history of running Enterprise Applications. Node.js can also be used but is less mature.
  • Security. Spring and Spring Boot are clearly better than Node.js. For example, Kerberos support in Node is limited while Spring Boot provides easy abstractions for several security implementations amongst which Kerberos tokens.
  • RDBMS. This is more easy to use in Spring Boot because of JPA.

Node.js beats Spring Boot also in several aspects

  • Build/package management. People who have experience with Maven and NPM often prefer NPM
  • UI. JavaScript is of course the language of choice for front-end applications. The Java based frameworks, such as the JSF variants, by far do not offer the productivity of, for example, a framework like AngularJS.
  • Document databases like MongoDB. When you can work with JSON, JavaScript code running on Node.js makes it very easy to interact with the database.

Spring Boot, being in the Java ecosystem, can also be combined with for example Ratpack. See here. Ratpack provides a high throughput, non-blocking web layer. The syntax is similar to how you would write Node.js code. This is of course not so much of an argument for Spring Boot, since modules on Node.js provide similar functionality. Both solutions are more alike than you would think at first glance.

Whether you choose Node.js or Spring Boot probably depends mainly on the skills you have available and on your application landscape. If you’re from the JavaScript world, you might prefer to write your microservices in Node.js. If you’re from the Java world, you will prefer Spring Boot. It is important to understand that there is no obviously superior choice between Node.js and Spring Boot.

Getting started with Spring Boot

The easiest way to get started is first watch some online courses. For example this one from Java Brains. I’ll provide some nice to knows below.

Spring Tool Suite (STS)

As for an IDE, every Java IDE will do. However, since Spring Boot is built on top of Spring, you could consider using Spring Tool Suite (STS). This is a distribution of Eclipse with many Spring-specific features which make development of Spring applications easier.

Spring Initializr

An alternative way to get your starter project is to go to https://start.spring.io/, indicate your dependencies and click the Generate project button. This will generate a Maven or Gradle project for you with the required dependencies already added.

With STS, you can also use the Spring Initializr functionality easily.

Spring Boot CLI

Spring Boot CLI offers features to create and run Groovy Spring Boot applications. Groovy requires less code than Java to do similar things. It is a scripting language which runs on the JVM, and from Groovy you can access regular Java classes/libraries.

You can for example create an application like:

@RestController
class HelloWorldClass {

    @RequestMapping("/")
    String home() {
        return "Hello World!"
    }
}

Save this as a Groovy script (e.g. app.groovy) and run it with Spring Boot CLI like: spring run app.groovy

Getting actually started

To get started with Spring Boot, you have to add some entries to your pom.xml file and you’re ready to go. Easiest is to use the New Spring Starter Project wizard from STS, since it will generate a pom, a main class and a test class for you. That is what I used for my sample project here.

A simple pom.xml to get started with Spring Boot
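Reconstructed as a minimal sketch – the version number is an assumption, pick a current release:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.8.RELEASE</version>
</parent>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>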

Spring and Oracle

Spring is a very common Java framework. You can find traces of it in several Oracle products and features; below are some examples. If you look in other Oracle products, especially those that are Java based, I expect you will find many more.

SOA Suite

For example in Oracle SOA Suite.

  • SOA Suite itself under the covers uses Spring
  • SOA Suite can use Spring components

Massive Open Online Course

Oracle uses Spring Boot in courses it provides. For example in the Develop RESTful Java Microservices deployable on Oracle Cloud MOOC.

Application Container Cloud Service (ACCS)

ACCS has been around for a while. Together with Spring Boot, they provide an ideal combination to get your microservices developed and running quickly.

Application Container Cloud Service provides the features of the Twelve-Factor App out of the (cloudy) box, so you don’t have to develop these yourself. These of course also align with microservice principles, like executing apps as stateless processes.

If you want to use ACCS with Spring Boot, there are two ways you can deploy your Spring Boot application.

  • You can create a WAR file by specifying war in the packaging tag in the pom.xml file. Next you can deploy this WAR file as a Java EE Web Application. This runs WebLogic in the background. Read more here.
  • You can create a JAR file by specifying jar in the packaging tag in the pom.xml file. Next you can run this JAR file directly since you’ll get an embedded Tomcat with it and can run it as a Java SE application. The configuration will be part of the application here.

I’ve not compared both options in detail, but I can imagine that if you want to run a ‘micro’-service, an entire WebLogic Server might make it more of a ‘macro’-service.

The post Getting started with Spring Boot microservices. Why and how. appeared first on AMIS Oracle and Java Blog.

Automate calls to SOAP and REST webservices using simple Python scripts


Probably not many people will tell you running batches over webservices is a good idea. Sometimes though, it can be handy to have a script available to generate webservice calls based on a template message with variables and to automate processing of the response messages. In addition, if you have a large number of calls, executing the calls in parallel might save you a lot of time, provided your service platform can handle the concurrency.

Scripts such as these might help bridge part of the gap between the old-fashioned batch oriented world and the service world. You can for example use them to call services based on a text-file export from an old system, to create certain entities in a modern system which offers APIs. Scripts such as these should of course not be used for structured, regular integration, but they are valuable as one-off solutions. The provided scripts of course come with no warranties or guarantees of any nature; most likely you will need to adapt them to your specific use-case.

Python setup

The scripts below require Python 3.6 with the requests module installed (pip install requests). The other modules used are present by default in the usual Python installations. I’ve used PyCharm by JetBrains as IDE. Without an IDE, you can also install Python 3.6 from python.org here to just run the scripts.

Performing calls to SOAP services

SOAP services use the same endpoint for all requests. Based on the operation (the SOAPAction HTTP header), a different part of the service is executed. The message contents can differ. For example, you want to call a service for a list of different customers. A template message is ideally suited to such a case. The below Python 3.6 script does just that: generate messages based on a template and an input file and fire them at a service endpoint with a specified number of concurrent threads. After a response is received, it is parsed and a specific field from the response is saved in an output file. Errors are saved in a separate file.

You can view the script here. The script is slightly more than 50 lines and contains Python samples of (among other things):

  • How to execute SOAP calls (POST request with HTTP headers) with the requests module
  • How to work with a message template and variables with string.Template
  • How to concurrently execute calls with concurrent.futures
  • How to parse SOAP responses with xml.etree.ElementTree

The line Maarten in input.txt will give you Maarten : Hello Maarten in the outputok.txt file if the call succeeded. I’ve used a simple SOA Suite test service which you can also find in the mentioned directory.

Performing calls to REST services

When working with REST services, the URL usually contains variables. In this example I’m calling an online and publicly available API of the Dutch Chamber of Commerce to search for companies based on their file number (KvK number). When I receive the result, I check whether the company is found and has only a single location; other cases I consider an error.

You can view the script here. It contains samples of (among other things) how you can do URL manipulation and GET requests. The parsing of the response for this sample is extremely simple: I just check whether the result document contains specific text strings. For a ‘real’ REST service you might want to do some more thorough JSON parsing. For this example, I’ve kept the code as simple and short as possible.

The post Automate calls to SOAP and REST webservices using simple Python scripts appeared first on AMIS Oracle and Java Blog.

Java: How to fix Spring @Autowired annotation not working issues


Spring is a powerful framework, but it requires some skill to use efficiently. When I started working with Spring a while ago (actually Spring Boot, to develop microservices) I encountered some challenges related to dependency injection and using the @Autowired annotation. In this blog I’ll explain the issues and possible solutions. Do note that since I do not have a long history with Spring, the provided solutions might not be the best ones.

Introduction @Autowired

In Spring 2.5 (2007), a new feature became available: the @Autowired annotation. What this annotation basically does is provide an instance of a class when you request it, in for example an instance variable of another class. You can do things like:


@Autowired
MyClass myClass;

This causes myClass to automagically be assigned an instance of MyClass if certain requirements are met.

How does it know which classes can provide instances? The Spring Framework does this by performing a scan of components when the application starts. In Spring Boot, the @SpringBootApplication annotation provides this functionality. You can use the @ComponentScan annotation to tweak this behavior if you need to. Read more here.

The classes of which instances are acquired also have to be known to the Spring framework (to be picked up by the ComponentScan), so they require a Spring annotation such as @Component, @Repository, @Service, @Controller or @Configuration. Spring manages the life-cycle of instances of those classes. They are known in the Spring context and can be used for injection.

Order of execution

When a constructor of a class is called, the @Autowired instance variables do not contain their values yet. If you are dependent on them for the execution of specific logic, I suggest you use the @PostConstruct annotation. This annotation allows a specific method to be executed after construction of the instance and also after all the @Autowired instances have been injected.
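A minimal illustration of this pattern (class names are illustrative; @PostConstruct comes from javax.annotation):

@Service
public class MyService {

    @Autowired
    MyClass myClass;

    public MyService() {
        // myClass is still null here - injection happens after construction
    }

    @PostConstruct
    public void init() {
        // myClass has been injected at this point and can safely be used
        myClass.doSomething();
    }
}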

Multiple classes which fit the @Autowired bill

If you want an instance of a class implementing an interface and there are multiple classes implementing that interface, you can use different techniques to let Spring determine the correct one. Read here.

You can indicate a @Primary candidate for @Autowired. This sets a default class to be wired. Some other alternatives are to use @Resource, @Qualifier or @Inject. Read more here. @Autowired is Spring specific. The others are not.

You can for example name a @Component like:


@Component("beanName1")
public class MyClass1 implements InterfaceName {
}

@Component("beanName2")
public class MyClass2 implements InterfaceName {
}

And use it in an @Autowired like


@Autowired
@Qualifier("beanName1")
InterfaceName myImpl;

myImpl will get an instance of MyClass1.

When @Autowired doesn’t work

There are several reasons @Autowired might not work.

When a new instance is created not by Spring but by, for example, manually calling a constructor, the instance of the class will not be registered in the Spring context and thus not be available for dependency injection. Also, when you use @Autowired in the class of which you created a new instance yourself, the Spring context will not be known to it and thus injection will most likely fail there as well.
Another reason can be that the class you want to use @Autowired in is not picked up by the ComponentScan. This can basically be for one of two reasons.

  • The package is outside the ComponentScan search path. Move the package to a scanned location or configure the ComponentScan to fix this.
  • The class in which you want to use @Autowired does not have a Spring annotation. Add one of the following annotations to the class: @Component, @Repository, @Service, @Controller, @Configuration. They have different behaviors, so choose carefully! Read more here.

Instances created not by Spring

Autowired is cool! It makes certain things very easy. Instances created not by Spring are a challenge and stand between you and @Autowired. How do you deal with this?

Do not create your own instances; let Spring handle it

If you can do this (refactor), it is the easiest way to go. If you need to deal with instances not created by Spring, there are some workarounds available below, but most likely they will have unexpected side-effects. It is easy to add Spring annotations, have the class be picked up by the ComponentScan and let instances be @Autowired when you need them. This avoids having to create new instances regularly or having to forward them through a call stack.

Not like this


//Autowired annotations will not work inside MyClass. Other classes that want to use MyClass have to create their own instances, or you have to forward this one.

public class MyClass {
}

public class MyParentClass {
MyClass myClass = new MyClass();
}

But like this

Below how you can refactor this in order to Springify it.


//@Component makes sure it is picked up by the ComponentScan (if it is in the right package). This allows @Autowired to work in other classes for instances of this class
@Component
public class MyClass {
}

//@Service makes sure the @Autowired annotation is processed
@Service
public class MyParentClass {
//myClass is assigned an instance of MyClass
@Autowired
MyClass myClass;
}

Manually force Autowired to be processed

If you want to manually create a new instance and force the @Autowired annotation used inside it to be processed, you can obtain the SpringApplicationContext (see here) and do the following (from here):


B bean = new B();
AutowireCapableBeanFactory factory = applicationContext.getAutowireCapableBeanFactory();
factory.autowireBean( bean );
factory.initializeBean( bean, "bean" );

initializeBean processes the @PostConstruct annotation. There is some discussion though whether this breaks the inversion of control principle. Read for example here.

Manually add the bean to the Spring context

If you not only want the @Autowired annotation to be processed inside the bean, but also want to make the new instance available to be autowired into other instances, it needs to be present in the SpringApplicationContext. You can obtain the SpringApplicationContext by implementing ApplicationContextAware (see here) and use it to register the bean. A nice example of such a ‘dynamic Spring bean’ can be found here and here. There are other flavors which provide pretty similar functionality, for example here.

The post Java: How to fix Spring @Autowired annotation not working issues appeared first on AMIS Oracle and Java Blog.


Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application


Spring Boot allows you to quickly develop microservices. Application Container Cloud Service (ACCS) allows you to easily host Spring Boot applications. Oracle provides an Application Cache based on Coherence which you can use from applications deployed to ACCS. In order to use the Application Cache from Spring Boot, Oracle provides an open source Java SDK. In this blog post I’ll give an example of how you can use the Application Cache from Spring Boot using this SDK. You can find the sample code here.

Using the Application Cache Java SDK

Create an Application Cache

You can use a web interface to easily create a new instance of the Application Cache. A single instance can contain multiple caches. A single application can use multiple caches, but only a single cache instance. Multiple applications can use the same cache instance and caches. Mind that the application and the application cache need to be deployed in the same region in order to allow connectivity. Also, do not use the ‘-’ character in your cache name, since the LBaaS configuration will fail.

Use the Java SDK

Spring Boot applications commonly use an architecture which defines abstraction layers. External resources are exposed through a controller. The controller uses services. These services provide operations to execute specific tasks. The services use repositories for their connectivity / data access objects. Entities are the POJOs which are exchanged/persisted and exposed, for example as REST, in a controller. In order to connect to the cache, the repository seems like a good location. Which repository to use (a persistent back-end like a database or, for example, the application cache repository) can be handled by the service; this can differ per operation. Get operations, for example, might directly use the cache repository (which could use other sources if it can’t find its data), while you might want to do Put operations in both the persistent backend and the cache. See for an example here.

In order to gain access to the cache, first a session needs to be established. The session can be obtained from a session provider, which can be a local session provider or a remote session provider. The local session provider can be used for local development. It can be created with an expiry which indicates the validity period of items in the cache. When developing/testing, you might try setting this to ‘never expires’, since otherwise you might not be able to find entries which you expect to be there. I have not looked further into this issue or created a service request for it, nor do I know if this is only an issue with the local session provider. See for sample code here or here.

When creating a session, you also need to specify the protocol to use. When using the Java SDK, you can (at the moment) choose from GRPC and REST. GRPC might be more challenging to implement without an SDK in, for example, Node.js code, but I have not tried this. I have not compared the performance of the two protocols. Another difference is that the application uses different ports and URLs to connect to the cache. You can see how to determine the correct URL / protocol from ACCS environment variables here; a hedged sketch follows below.
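
A sketch of establishing a session. The class and method names follow the SDK samples linked above but are not verified signatures, and using the CACHING_INTERNAL_CACHE_URL environment variable as the source of the cache URL is an assumption:

// Assumed API shapes - verify against the linked SDK samples.
String cacheUrl = System.getenv("CACHING_INTERNAL_CACHE_URL");     // assumption: set by ACCS for a bound cache
SessionProvider sessionProvider = (cacheUrl != null)
        ? new RemoteSessionProvider("http://" + cacheUrl)          // running on ACCS
        : new LocalSessionProvider();                              // local development (assumed constructor)
Session session = sessionProvider.createSession(Transport.rest()); // REST protocol; GRPC is the alternative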

The ACCS Application Cache Java SDK allows you to add a Loader and a Serializer class when creating a Cache object. The Loader class is invoked when a value cannot be found in the cache. This allows you to fetch objects which are not in the cache. The Serializer is required so the object can be transferred via REST or GRPC. You might do something like below.
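
For example, something along these lines – Person, fetchPersonFromBackend and PersonSerializer are hypothetical, and the getCache signature is an assumption based on the description above:

// Hypothetical sketch - verify against the SDK's actual Cache, Loader and Serializer contracts.
Loader<Person> loader = key -> fetchPersonFromBackend(key); // invoked on a cache miss (assumed shape)
Serializer<Person> serializer = new PersonSerializer();     // converts a Person to/from a transferable format
Cache<Person> cache = session.getCache("person-cache", loader, serializer); // assumed signature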

Injection

Mind that when using Spring Boot you do not want to create instances of objects yourself by directly doing something like: MyClass bla = new MyClass(). You want to let Spring handle this by using the @Autowired annotation.

Do mind though that the @Autowired annotation assigns instances to variables only after the constructor of the instance has been executed. If you want to use the @Autowired variables after construction but before other methods are executed, you should put that logic in a @PostConstruct annotated method, as illustrated below. See also here. See here for a concrete implemented sample.
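
A small illustration of this – CacheRepository and its warmUp method are hypothetical:

import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class CacheBootstrap {

    @Autowired
    private CacheRepository cacheRepository; // hypothetical autowired dependency

    public CacheBootstrap() {
        // cacheRepository is still null here: field injection happens after construction
    }

    @PostConstruct
    public void init() {
        // autowired fields are available from this point on
        cacheRepository.warmUp(); // hypothetical method
    }
}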

Connectivity

The Application Cache can be restarted at certain times (e.g. during maintenance such as patching, or when scaling) and there can be connectivity issues for other reasons. In order to deal with that, it is good practice to make the connection handling more robust by implementing retries. See for example here.
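
A minimal retry sketch, assuming any cache operation may throw a transient RuntimeException during a restart; all names are illustrative:

import java.util.function.Supplier;

public <T> T withRetry(Supplier<T> operation, int maxAttempts, long backoffMillis) {
    RuntimeException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return operation.get();
        } catch (RuntimeException e) {
            last = e; // remember the failure and retry after a linear back-off
            try {
                Thread.sleep(backoffMillis * attempt);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw last;
            }
        }
    }
    throw last;
}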

Deploy a Spring Boot application to ACCS

Create a deployable

In order to deploy an application to ACCS, you need to create a ZIP file in a specific format. In this ZIP file there should at least be a manifest.json file which describes (amongst other things) how to start the application. You can read more here. If you have environment specific properties, binding information (such as which cache to use) and environment variables, you can create a deployment.json file. In addition to those metadata files, there of course needs to be the application itself. In case of Spring Boot, this is a large JAR file which contains all dependencies. You can create this file with the spring-boot-maven-plugin. The ZIP itself is most easily composed with the maven-assembly-plugin.
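
As an indication, a minimal manifest.json for a Java SE application could look like this – the JAR name is illustrative:

{
  "runtime": { "majorVersion": "8" },
  "command": "java -jar accs-spring-boot-sample-0.0.1-SNAPSHOT.jar"
}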

Deploy to ACCS

There are two major ways (next to directly using the APIs with, for example, curl) in which you can deploy to ACCS: manually, or by using the Developer Cloud Service. The process to do this from Developer Cloud Service is described here. That route is quicker (it allows redeployment on a Git commit, for example) and more flexible. The manual procedure is outlined below. An important thing to mind is that if you deploy the same application under the same name several times, you might encounter issues with the application not being replaced by the new version. In that case you can do two things. Deploy under a different name every time; the name of the application is however reflected in the URL, and this could cause issues for users of the application. Another way is to remove the files from the Storage Cloud Service before redeployment, so you are sure the most recent version of the deployable ends up in ACCS.

Manually

Create a new Java SE application.


Upload the previously created ZIP file.

References

Introducing Application Cache Client Java SDK for Oracle Cloud

Caching with Oracle Application Container Cloud

Complete working sample Spring Boot on ACCS with Application Cache (as soon as an SR is resolved)

A sample of using the Application Cache Java SDK. Application is Jersey based

The post Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application appeared first on AMIS Oracle and Java Blog.

Create a Node JS application for Downloading sources from GitHub


My objective: create a Node application to download sources from a repository on GitHub. I want to use this application to read a simple package.json-like file (that describes which reusable components (from which GitHub repositories) the application has dependencies on) and download all required resources from GitHub and store them in the local file system. This by itself may not seem very useful. However, it is a stepping stone on the road to a facility for run-time updates of application components, triggered by GitHub WebHooks.

I am making use of the Octokit Node JS library to interact with the REST APIs of GitHub. The code I have created will:

  • fetch the meta-data for all items in the root folder of a GitHub Repo (at the tip of a specific branch, or at a specific tag or commit identifier)
  • iterate over all items:
    • download the contents of the item if it is a file and create a local file with the content (and cater for large files and for binary files)
    • create a local directory for each item in the GitHub repo that is a directory, then recursively process the contents of that directory on GitHub

An example of the code in action:

A randomly selected GitHub repo (at https://github.com/lucasjellema/WebAppIframe2ADFSynchronize):

image

The local target directory is empty at the beginning of the action:

image

Run the code:

image

And the content is downloaded and written locally:

image

Note: the code could easily provide an execution report with details such as file size, download result, last change date, etc. It is currently very straightforward. Note: the gitToken is something you need to get hold of yourself in the GitHub dashboard: https://github.com/settings/tokens . Without a token, the code will still work, but you will be bound to the GitHub rate limit (of about 60 requests per hour).

const octokit = require('@octokit/rest')() 
const fs = require('fs');

var gitToken = "YourToken"

octokit.authenticate({
    type: 'token',
    token: gitToken
})

var targetProjectRoot = "C:/data/target/" 
var github = { "owner": "lucasjellema", "repo": "WebAppIframe2ADFSynchronize", "branch": "master" }

downloadGitHubRepo(github, targetProjectRoot)

async function downloadGitHubRepo(github, targetDirectory) {
    console.log(`Installing GitHub Repo ${github.owner}\\${github.repo}`)
    var repo = github.repo;
    var path = ''
    var owner = github.owner
    var ref = github.commit ? github.commit : (github.tag ? github.tag : (github.branch ? github.branch : 'master'))
    processGithubDirectory(owner, repo, ref, path, path, targetDirectory)
}

// let's assume that if the name ends with one of these extensions, we are dealing with a binary file:
const binaryExtensions = ['png', 'jpg', 'tiff', 'wav', 'mp3', 'doc', 'pdf']
var maxSize = 1000000;
function processGithubDirectory(owner, repo, ref, path, sourceRoot, targetRoot) {
    octokit.repos.getContent({ "owner": owner, "repo": repo, "path": path, "ref": ref })
        .then(result => {
            var targetDir = targetRoot + path
            // check if targetDir exists 
            checkDirectorySync(targetDir)
            result.data.forEach(item => {
                if (item.type == "dir") {
                    processGithubDirectory(owner, repo, ref, item.path, sourceRoot, targetRoot)
                } // if directory
                if (item.type == "file") {
                    if (item.size > maxSize) {
                        var sha = item.sha
                        octokit.gitdata.getBlob({ "owner": owner, "repo": repo, "sha": item.sha }
                        ).then(result => {
                            var target = `${targetRoot + item.path}`
                            fs.writeFile(target
                                , Buffer.from(result.data.content, 'base64').toString('utf8'), function (err, data) { })
                        })
                            .catch((error) => { console.log("ERROR (large file) " + error) })
                        return;
                    }// if large file
                    octokit.repos.getContent({ "owner": owner, "repo": repo, "path": item.path, "ref": ref })
                        .then(result => {
                            var target = `${targetRoot + item.path}`
                            if (binaryExtensions.includes(item.path.split('.').pop().toLowerCase())) { // check the actual file extension ('tiff' would not match a slice(-3) check)
                                fs.writeFile(target
                                    , Buffer.from(result.data.content, 'base64'), function (err, data) { reportFile(item, target) })
                            } else
                                fs.writeFile(target
                                    , Buffer.from(result.data.content, 'base64').toString('utf8'), function (err, data) { if (!err) reportFile(item, target); else console.log('Error writing file: ' + err) })

                        })
                        .catch((error) => { console.log("ERROR " + error) })
                }// if file
            })
        }).catch((error) => { console.log("ERROR XXX" + error) })
}//processGithubDirectory

function reportFile(item, target) {
    console.log(`- installed ${item.name} (${item.size} bytes) in ${target}`)
}

function checkDirectorySync(directory) {
    try {
        fs.statSync(directory);
    } catch (e) {
        fs.mkdirSync(directory);
        console.log("Created directory: " + directory)
    }
}

Resources

Octokit REST API Node JS library: https://github.com/octokit/rest.js 

API Documentation for Octokit: https://octokit.github.io/rest.js/#api-Repos-getContent

The post Create a Node JS application for Downloading sources from GitHub appeared first on AMIS Oracle and Java Blog.

Node & Express application to proxy HTTP requests – simply forwarding the response to the original caller


The requirement is simple: a Node JS application that receives HTTP requests and forwards (some of) them to other hosts, and subsequently returns the responses it receives to the original caller.

image

This can be used in many situations – to ensure all resources loaded in a web application come from the same host (one way to handle CORS), to have content in IFRAMEs loaded from the same host as the surrounding application or to allow connection between systems that cannot directly reach each other. Of course, the proxy component does not have to be the dumb and mute intermediary – it can add headers, handle faults, perform validation and keep track of the traffic. Before you know it, it becomes an API Gateway…

In this article, I show a very simple example of a proxy that I use for the following purpose: I create a Rich Web Client application (Angular, React, Oracle JET) – and some of the components used are owned and maintained by an external party. Instead of adding the sources to the server that serves the static sources of the web application, I use the proxy to retrieve these specific sources from their real origin (either a live application, a web server or even a Git repository). This allows me to have the latest sources of these components at any time, without redeploying my own application.

The proxy component is of course very simple and straightforward. And I am sure it can be much improved upon. For my current purposes, it is good enough.

The Node application consists of the file www that is initialized with npm start through package.json. This file does some generic initialization of Express (such as defining the port on which to listen). Then it defers to app.js for all request handling. In app.js, a static file server is configured to serve files from the local /public subdirectory (using express.static).

www:

var app = require('../app');
var debug = require('debug')(' :server');
var http = require('http');

var port = normalizePort(process.env.PORT || '3000');
app.set('port', port);
var server = http.createServer(app);
server.listen(port);
server.on('error', onError);
server.on('listening', onListening);

function normalizePort(val) {
var port = parseInt(val, 10);

if (isNaN(port)) {
// named pipe
return val;
}

if (port >= 0) {
// port number
return port;
}

return false;
}

function onError(error) {
if (error.syscall !== 'listen') {
throw error;
}

var bind = typeof port === 'string'
? 'Pipe ' + port
: 'Port ' + port;

// handle specific listen errors with friendly messages
switch (error.code) {
case 'EACCES':
console.error(bind + ' requires elevated privileges');
process.exit(1);
break;
case 'EADDRINUSE':
console.error(bind + ' is already in use');
process.exit(1);
break;
default:
throw error;
}
}

function onListening() {
var addr = server.address();
var bind = typeof addr === 'string'
? 'pipe ' + addr
: 'port ' + addr.port;
debug('Listening on ' + bind);
}

package.json:

{
"name": "jet-on-node",
"version": "0.0.0",
"private": true,
"scripts": {
"start": "node ./bin/www"
},
"dependencies": {
"body-parser": "~1.18.2",
"cookie-parser": "~1.4.3",
"debug": "~2.6.9",
"express": "~4.15.5",
"morgan": "~1.9.0",
"pug": "2.0.0-beta11",
"request": "^2.85.0",
"serve-favicon": "~2.4.5"
}
}

app.js:

var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

const http = require('http');
const url = require('url');
const fs = require('fs');
const request = require('request');

var app = express();
// uncomment after placing your favicon in /public
//app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());

// define static resource server from local directory public (for any request not otherwise handled)
app.use(express.static(path.join(__dirname, 'public')));

app.use(function (req, res, next) {
res.header("Access-Control-Allow-Origin", "*");
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
next();
});

// catch 404 and forward to error handler
app.use(function (req, res, next) {
var err = new Error('Not Found');
err.status = 404;
next(err);
});

// error handler
app.use(function (err, req, res, next) {
// set locals, only providing error in development
res.locals.message = err.message;
res.locals.error = req.app.get('env') === 'development' ? err : {};

// render the error page
res.status(err.status || 500);
res.json({
message: err.message,
error: err
});
});

module.exports = app;

Then the interesting bit: requests for URL /js/jet-composites/* are intercepted: instead of having those requests also handled by serving local resources (from directory public/js/jet-composites/*), the requests are interpreted and routed to an external host. The responses from that host are returned to the requester. To the requesting browser, there is no distinction between resources served locally as static artifacts from the local file system and resources retrieved through these redirected requests.

// any request at /js/jet-composites (for resouces in that folder)
// should be intercepted and redirected
var compositeBasePath = '/js/jet-composites/'
app.get(compositeBasePath + '*', function (req, res) {
var requestedResource = req.url.substr(compositeBasePath.length)
// parse URL
const parsedUrl = url.parse(requestedResource);
// extract URL path
let pathname = `${parsedUrl.pathname}`;
// maps file extension to MIME types
const mimeType = {
'.ico': 'image/x-icon',
'.html': 'text/html',
'.js': 'text/javascript',
'.json': 'application/json',
'.css': 'text/css',
'.png': 'image/png',
'.jpg': 'image/jpeg',
'.wav': 'audio/wav',
'.mp3': 'audio/mpeg',
'.svg': 'image/svg+xml',
'.pdf': 'application/pdf',
'.doc': 'application/msword',
'.eot': 'application/vnd.ms-fontobject',
'.ttf': 'application/font-sfnt'
};

handleResourceFromCompositesServer(res, mimeType, pathname)
})

async function handleResourceFromCompositesServer(res, mimeType, requestedResource) {
var reqUrl = "http://yourhost:theport/applicationURL/" + requestedResource
// fetch resource and return
var options = url.parse(reqUrl);
options.method = "GET";
options.agent = false;

// options.headers['host'] = options.host;
http.get(reqUrl, function (serverResponse) {
console.log('<== Received res for', serverResponse.statusCode, reqUrl); console.log('\t-> Request Headers: ', options);
console.log(' ');
console.log('\t-> Response Headers: ', serverResponse.headers);

serverResponse.pause();

serverResponse.headers['access-control-allow-origin'] = '*';

switch (serverResponse.statusCode) {
// pass through. we're not too smart here...
case 200: case 201: case 202: case 203: case 204: case 205: case 206:
case 304:
case 400: case 401: case 402: case 403: case 404: case 405:
case 406: case 407: case 408: case 409: case 410: case 411:
case 412: case 413: case 414: case 415: case 416: case 417: case 418:
res.writeHeader(serverResponse.statusCode, serverResponse.headers);
serverResponse.pipe(res, { end: true });
serverResponse.resume();
break;

// fix host and pass through.
case 301:
case 302:
case 303:
serverResponse.statusCode = 303;
serverResponse.headers['location'] = 'http://localhost:' + (process.env.PORT || 3000) + '/' + serverResponse.headers['location']; // PORT was not defined in this module; fall back to the configured port
console.log('\t-> Redirecting to ', serverResponse.headers['location']);
res.writeHeader(serverResponse.statusCode, serverResponse.headers);
serverResponse.pipe(res, { end: true });
serverResponse.resume();
break;

// error everything else
default:
var stringifiedHeaders = JSON.stringify(serverResponse.headers, null, 4);
serverResponse.resume();
res.writeHeader(500, {
'content-type': 'text/plain'
});
res.end(process.argv.join(' ') + ':\n\nError ' + serverResponse.statusCode + '\n' + stringifiedHeaders);
break;
}

console.log('\n\n');
});
}

Resources

Express Tutorial Part 2: Creating a skeleton website - https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs/skeleton_website

Building a Node.js static file server (files over HTTP) using ES6+ - http://adrianmejia.com/blog/2016/08/24/Building-a-Node-js-static-file-server-files-over-HTTP-using-ES6/

How To Combine REST API calls with JavaScript Promises in node.js or OpenWhisk - https://medium.com/adobe-io/how-to-combine-rest-api-calls-with-javascript-promises-in-node-js-or-openwhisk-d96cbc10f299

Node script to forward all http requests to another server and return the response with an access-control-allow-origin header. Follows redirects. - https://gist.github.com/cmawhorter/a527a2350d5982559bb6

5 Ways to Make HTTP Requests in Node.js - https://www.twilio.com/blog/2017/08/http-requests-in-node-js.html

The post Node & Express application to proxy HTTP requests – simply forwarding the response to the original caller appeared first on AMIS Oracle and Java Blog.

Simple CQRS – Tweets to Apache Kafka to Elastic Search Index using a little Node code


Put simply – CQRS (Command Query Responsibility Segregation) is an architecture pattern that recognizes the fact that it may be wise to separate the database that processes data manipulations from the engines that handle queries. When data retrieval requires special formats, scale, availability, TCO, location, search options and response times, it is worth considering introducing additional databases to handle those specific needs. These databases can provide data in a way that caters for the special needs of specific consumers – by offering data in a filtered, preprocessed format, shape or aggregation, with higher availability, at closer physical distance, with support for special search patterns and with better performance and scalability.

A note of caution: you only introduce CQRS in a system if there is a clear need for it. Not because you feel obliged to implement such a shiny, much talked about pattern or you feel as if everyone should have it. CQRS is not a simple thing – especially in existing systems, packaged applications and legacy databases. Detecting changes and extracting data from the source, transporting and converting the data and applying the data in a reliable, fast enough way with the required level of consistency is not trivial.

In many of my conference presentations, I show demonstrations with running software. To better clarify what I am talking about, to allow the audience to try things out for themselves and because doing demos usually is fun. And a frequent element in these demos is Twitter. Because it is well known and because it allows the audience to participate in the demo. I can invite an audience to tweet using an agreed hashtag and their tweets trigger the demo or at least make an appearance. In this article, I discuss one of these demos – showing an example of CQRS. The picture shows the outline: tweets are consumed by a Node application. Each tweet is converted to an event on a Kafka Topic. This event is consumed by a second Node application (potentially one of multiple instances in a Kafka Consumer Group, to allow for more scalability). This Node application creates a new record in an Elastic Search index – the Query destination in this little CQRS spiel. The out of the box dashboard tool Kibana allows us to quickly inspect and analyse the tweet records. Additionally, we can create an advanced query service on top of Elastic Search.

This article shows the code behind this demo. This code was prepared for the JEEConf 2018 conference in Kyiv, Ukraine – and can be found in GitHub: https://github.com/lucasjellema/50-shades-of-data-jeeconf2018-kyiv/tree/master/twitter-kafka-demo .

image

The main elements in the demo:

1. Kafka Topic tweets-topic (in my demo, this topic is created in Oracle Cloud Event Hub Service, a managed Kafka cloud service)

2. Node application that consumes from Twitter – and publishes to the Kafka topic

3. (Postman Collection to create) Elastic Search Index plus custom mapping (primarily to extract a proper creation date time value from a date string) (in my demo, this Elastic Search Index is created in an Elastic Search instance running in a Docker container on Oracle Container Cloud)

4. Node application that consumes the events from the Kafka tweets-topic and turns each event into a new record in the index. In this demo, the Node application is also running on Oracle Cloud (Application Container Cloud), but that does not have to be the case

5. Kibana dashboard on top of the Tweets Index. In my demo, Kibana is also running in a Docker container in Oracle Container Cloud

1. Kafka Tweets Topic on Oracle Event Hub Cloud Service

image

After completing the wizard, the topic is created and can be accessed by producers and consumers.

2. Node application that consumes from Twitter – and publishes to the Kafka topic

The Node application consists of an index.js file that handles HTTP requests – for health checking – and consumes from Twitter and publishes to a Kafka Topic. It uses a file twitterconfig.js (not included) that contains the secret details of a Twitter client. The contents of this file should look like this – and should contain your own Twitter client details:

// CHANGE THIS **************************************************************
// go to https://apps.twitter.com/ to register your app
var twitterconfig = {
    consumer_key: 'mykey',
    consumer_secret: 'mysecret',
    access_token_key: 'at-key',
    access_token_secret: 'at-secret'  
    };
    
    module.exports = {twitterconfig};

The index.js file requires the npm libraries kafka-node and twit as well as express and http for handling http requests.

The code can be said to be divided in three parts:

  • initialization, create HTTP server and handle HTTP requests
  • Consume from Twitter
  • Publish to Kafka

Here are the three code sections:

const express = require('express');
var http = require('http')
const app = express();
var PORT = process.env.PORT || 8144;
const server = http.createServer(app);
var APP_VERSION = "0.0.3"

const startTime = new Date()
const bodyParser = require('body-parser');
app.use(bodyParser.json());
var tweetCount = 0;
app.get('/about', function (req, res) {
  var about = {
    "about": "Twitter Consumer and Producer to " + TOPIC_NAME,
    "PORT": process.env.PORT,
    "APP_VERSION ": APP_VERSION,
    "Running Since": startTime,
    "Total number of tweets processed": tweetCount

  }
  res.json(about);
})
server.listen(PORT, function listening() {
  console.log('Listening on %d', server.address().port);
});

Code for consuming from Twitter – in this case for the hash tags #jeeconf,#java and #oraclecode:

var Twit = require('twit');
const { twitterconfig } = require('./twitterconfig');

var T = new Twit({
  consumer_key: twitterconfig.consumer_key,
  consumer_secret: twitterconfig.consumer_secret,
  access_token: twitterconfig.access_token_key,
  access_token_secret: twitterconfig.access_token_secret,
  timeout_ms: 60 * 1000,
});


var twiterHashTags = process.env.TWITTER_HASHTAGS || '#oraclecode,#java,#jeeconf';
var tracks = { track: twiterHashTags.split(',') };

let tweetStream = T.stream('statuses/filter', tracks)
tweetstream(tracks, tweetStream);

function tweetstream(hashtags, tweetStream) {
  console.log("Started tweet stream for hashtag #" + JSON.stringify(hashtags));

  tweetStream.on('connected', function (response) {
    console.log("Stream connected to twitter for #" + JSON.stringify(hashtags));
  })
  tweetStream.on('error', function (error) {
    console.log("Error in Stream for #" + JSON.stringify(hashtags) + " " + error);
  })
  tweetStream.on('tweet', function (tweet) {
    produceTweetEvent(tweet);
  });
}

Code for publishing to the Kafka Topic a516817-tweetstopic:

const kafka = require('kafka-node');
const APP_NAME = "TwitterConsumer"

var EVENT_HUB_PUBLIC_IP = process.env.KAFKA_HOST || '129.1.1.116';
var TOPIC_NAME = process.env.KAFKA_TOPIC || 'a516817-tweetstopic';

var Producer = kafka.Producer;
var client = new kafka.Client(EVENT_HUB_PUBLIC_IP);
var producer = new Producer(client);
KeyedMessage = kafka.KeyedMessage;

producer.on('ready', function () {
  console.log("Producer is ready in " + APP_NAME);
});
producer.on('error', function (err) {
  console.log("failed to create the client or the producer " + JSON.stringify(err));
})


let payloads = [
  { topic: TOPIC_NAME, messages: '*', partition: 0 }
];

function produceTweetEvent(tweet) {
  var hashtagFound = false;
  try {
    // find out which of the original hashtags { track: ['oraclecode', 'java', 'jeeconf'] } is among the hashtags for this tweet;
    // that is the one for the tagFilter property
    // select one other hashtag from tweet.entities.hashtags to set in property hashtag
    var tagFilter = "#jeeconf";
    var extraHashTag = "liveForCode";
    for (var i = 0; i < tweet.entities.hashtags.length; i++) {
      var tag = '#' + tweet.entities.hashtags[i].text.toLowerCase();
      console.log("inspect hashtag " + tag);
      var idx = tracks.track.indexOf(tag);
      if (idx > -1) {
        tagFilter = tag;
        hashtagFound = true;
      } else {
        extraHashTag = tag
      }
    }//for

    if (hashtagFound) {
      var tweetEvent = {
        "eventType": "tweetEvent"
        , "text": tweet.text
        , "isARetweet": tweet.retweeted_status ? "y" : "n"
        , "author": tweet.user.name
        , "hashtag": extraHashTag
        , "createdAt": tweet.created_at
        , "language": tweet.lang
        , "tweetId": tweet.id
        , "tagFilter": tagFilter
        , "originalTweetId": tweet.retweeted_status ? tweet.retweeted_status.id : null
      };
      eventPublisher.publishEvent(tweet.id, tweetEvent)
      tweetCount++
    }// if hashtag found
  } catch (e) {
    console.log("Exception in publishing Tweet Event " + JSON.stringify(e))
  }
}

var eventPublisher = module.exports;

eventPublisher.publishEvent = function (eventKey, event) {
  km = new KeyedMessage(eventKey, JSON.stringify(event));
  payloads = [
    { topic: TOPIC_NAME, messages: [km], partition: 0 }
  ];
  producer.send(payloads, function (err, data) {
    if (err) {
      console.error("Failed to publish event with key " + eventKey + " to topic " + TOPIC_NAME + " :" + JSON.stringify(err));
    }
    console.log("Published event with key " + eventKey + " to topic " + TOPIC_NAME + " :" + JSON.stringify(data));
  });
}//publishEvent

3. (Postman Collection to create) Elastic Search Index plus custom mapping

Preparation of an Elastic Search environment is done through REST API calls. These can be made from code or from the command line (using CURL) or from a tool such as Postman. In this case, I have created a Postman collection with a number of calls to prepare the Elastic Search index tweets.

image

The following requests are relevant:

  • Check if the Elastic Search server is healthy: GET {{ELASTIC_HOME}}:9200/_cat/health
  • Create the tweets index: PUT {{ELASTIC_HOME}}:9200/tweets
  • Create the mapping for the tweets index: PUT {{ELASTIC_HOME}}:9200/tweets/_mapping/doc

The body for the last request is relevant:

{
                "properties": {
                    "author": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "createdAt": {
                        "type": "date",
          "format": "EEE MMM dd HH:mm:ss ZZ yyyy"
  
                    },
                    "eventType": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "hashtag": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "isARetweet": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "language": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "tagFilter": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "text": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                "ignore_above": 256
                            }
                        }
                    },
                    "tweetId": {
                        "type": "long"
                    }
                }
            }

The custom aspect of the mapping is primarily to extract proper creation date time value from a date string.

4. Node application that consumes the events from the Kafka tweets-topic and turns each event into a new record in the elastic search index

The tweetListener.js file contains the code for two main purposes: handle HTTP requests (primarily for health checks) and consume events from the Kafka Topic for tweets. This file requires the npm modules express, http and kafka-node for this. It also imports the local module model from the file model.js. This module writes Tweet records to the Elastic Search index. It uses the npm module elasticsearch for this.

The code in tweetListener.js is best read in two sections:

First section for handling HTTP requests:

const express = require('express');
var https = require('https')
  , http = require('http')
const app = express();
var PORT = process.env.PORT || 8145;
const server = http.createServer(app);
var APP_VERSION = "0.0.3"


const bodyParser = require('body-parser');
app.use(bodyParser.json());
var tweetCount = 0;
app.get('/about', function (req, res) {
  var about = {
    "about": "Twitter Consumer from  " +SOURCE_TOPIC_NAME,
    "PORT": process.env.PORT,
    "APP_VERSION ": APP_VERSION,
    "Running Since": startTime,
    "Total number of tweets processed": tweetCount

  }
  res.json(about);
})
server.listen(PORT, function listening() {
  console.log('Listening on %d', server.address().port);
});

Second section for consuming Kafka events from tweets topic – and invoking the model module for each event:

var kafka = require('kafka-node');
var model = require("./model");

var tweetListener = module.exports;

var subscribers = [];
tweetListener.subscribeToTweets = function (callback) {
  subscribers.push(callback);
}

// var kafkaHost = process.env.KAFKA_HOST || "192.168.188.102";
// var zookeeperPort = process.env.ZOOKEEPER_PORT || 2181;
// var TOPIC_NAME = process.env.KAFKA_TOPIC ||'tweets-topic';

var KAFKA_ZK_SERVER_PORT = 2181;

var SOURCE_KAFKA_HOST = '129.1.1.116';
var SOURCE_TOPIC_NAME = 'a516817-tweetstopic';

var consumerOptions = {
    host: SOURCE_KAFKA_HOST + ':' + KAFKA_ZK_SERVER_PORT ,
  groupId: 'consume-tweets-for-elastic-index',
  sessionTimeout: 15000,
  protocol: ['roundrobin'],
  fromOffset: 'latest' // equivalent of auto.offset.reset valid values are 'none', 'latest', 'earliest'
};

var topics = [SOURCE_TOPIC_NAME];
var consumerGroup = new kafka.ConsumerGroup(Object.assign({ id: 'consumer1' }, consumerOptions), topics);
consumerGroup.on('error', onError);
consumerGroup.on('message', onMessage);

function onMessage(message) {
  console.log('%s read msg Topic="%s" Partition=%s Offset=%d', this.client.clientId, message.topic, message.partition, message.offset);
  console.log("Message Value " + message.value)

  subscribers.forEach((subscriber) => {
    subscriber(message.value);

  })
}

function onError(error) {
  console.error(error);
  console.error(error.stack);
}

process.once('SIGINT', function () {
  async.each([consumerGroup], function (consumer, callback) {
    consumer.close(true, callback);
  });
});


tweetListener.subscribeToTweets((message) => {
  var tweetEvent = JSON.parse(message);
  tweetCount++; 
  // ready to elastify tweetEvent
  console.log("Ready to put on Elastic "+JSON.stringify(tweetEvent));
  model.saveTweet(tweetEvent).then((result, error) => {
    console.log("Saved to Elastic "+JSON.stringify(result)+'Error?'+JSON.stringify(error));
})

})

The file model.js connects to the Elastic Search server and saves tweets to the tweets index when so requested. Very straightforward. Without any exception handling, for example in case the Elastic Search server does not accept a record or is simply unavailable. Remember: this is just the code for a demo.

var tweetsModel = module.exports;
var elasticsearch = require('elasticsearch');

var ELASTIC_SEARCH_HOST = process.env.ELASTIC_CONNECTOR || 'http://129.150.114.134:9200';

var client = new elasticsearch.Client({
    host: ELASTIC_SEARCH_HOST,
});

client.ping({
    requestTimeout: 30000,
}, function (error) {
    if (error) {
        console.error('elasticsearch cluster is down!');
    } else {
        console.log('Connection to Elastic Search is established');
    }
});

tweetsModel.saveTweet = async function (tweet) {
    try {
        var response = await client.index({
            index: 'tweets',
            id: tweet.tweetId,
            type: 'doc',
            body: tweet
        }
        );

        console.log("Response: " + JSON.stringify(response));
        return tweet;
    }
    catch (e) {
        console.error("Error in Elastic Search - index document " + tweet.tweetId + ":" + JSON.stringify(e))
    }

}

5. Kibana dashboard on top of the Tweets Index.

Kibana is an out of the box application, preconfigured in my case for the colocated Elastic Search server. Once I provide the name of the index I am interested in – tweets – Kibana immediately shows an overview of (selected time ranges in) this index. The peaks in the screenshot indicate May 19th and 20th, when JEEConf was taking place in Kyiv, where I presented this demo:

image

The same results in the Twitter UI:

image

The post Simple CQRS – Tweets to Apache Kafka to Elastic Search Index using a little Node code appeared first on AMIS Oracle and Java Blog.

Oracle Service Bus 12.2.1.1.0: Service Exploring via WebLogic Server MBeans with JMX


In a previous article I talked about an OSBServiceExplorer tool to explore the services (proxy and business) within the OSB via WebLogic Server MBeans with JMX. The code mentioned in that article was based on Oracle Service Bus 11.1.1.7 (11g).

In the meantime the OSB world has changed (for example now we can use pipelines) and it was time for me to pick up the old code and get it working within Oracle Service Bus 12.2.1.1.0 (12c).

This article will explain how the OSBServiceExplorer tool uses WebLogic Server MBeans with JMX in a 12c environment.

Unfortunately, getting the Java code to work in 12c wasn’t as straightforward as I hoped.

For more details on the OSB, WebLogic Server MBeans and JMX subject, I kindly refer you to my previous article. In this article I will refer to it as my previous MBeans 11g article.
[https://technology.amis.nl/2017/03/09/oracle-service-bus-service-exploring-via-weblogic-server-mbeans-with-jmx/]

Before using the OSBServiceExplorer tool in a 12c environment, I first created two OSB Projects (MusicService and TrackService) with pipelines, proxy and business services. I used Oracle JDeveloper 12c (12.2.1.1.0) for this (from within a VirtualBox appliance).

For the latest version of Oracle Service Bus see:
http://www.oracle.com/technetwork/middleware/service-bus/downloads/index.html

If you want to use a VirtualBox appliance, have a look at for example: Pre-built Virtual Machine for SOA Suite 12.2.1.3.0
[http://www.oracle.com/technetwork/middleware/soasuite/learnmore/vmsoa122130-4122735.html]

After deploying the OSB Projects that were created in JDeveloper to the WebLogic server, the Oracle Service Bus Console 12c (in my case: http://localhost:7101/servicebus) looks like:

Before we dive into the OSBServiceExplorer tool, I first give you some detailed information about the “TrackService” (from JDeveloper) that will be used as an example in this article.

The “TrackService” Service Bus overview (sboverview) looks like:

As you can see, several proxy services, a pipeline and a business service are present.

The Message Flow of pipeline “TrackServicePipeline” looks like:

The OSB Project structure of service “TrackService” looks like:

Runtime information (name and state) of the server instances

The OSBServiceExplorer tool writes its output to a text file called “OSBServiceExplorer.txt”.

First, the runtime information (name and state) of the server instances (Administration Server and Managed Servers) of the WebLogic domain is written to file.

Example content fragment of the text file:

Found server runtimes:
– Server name: DefaultServer. Server state: RUNNING

For more info and the responsible code fragment see my previous MBeans 11g article.

List of Ref objects (projects, folders, or resources)

Next, a list of Ref objects is written to file, including the total number of objects in the list.

Example content fragment of the text file:

Found total of 45 refs, including the following pipelines, proxy and business services:
– ProxyService: TrackService/proxy/TrackServiceRest
– BusinessService: MusicService/business/db_InsertCD
– BusinessService: TrackService/business/CDService
– Pipeline: TrackService/pipeline/TrackServicePipeline
– ProxyService: TrackService/proxy/TrackService
– Pipeline: MusicService/pipeline/MusicServicePipeline
– ProxyService: MusicService/proxy/MusicService
– ProxyService: TrackService/proxy/TrackServiceRestJSON

See the code fragment below (I highlighted the changes I made to the code compared to the 11g version):

Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);

fileWriter.write("Found total of " + refs.size() +
                 " refs, including the following pipelines, proxy and business services:\n");

for (Ref ref : refs) {
    String typeId = ref.getTypeId();

    if (typeId.equalsIgnoreCase("ProxyService")) {
        fileWriter.write("- ProxyService: " + ref.getFullName() +
                         "\n");
    } else if (typeId.equalsIgnoreCase("Pipeline")) {
        fileWriter.write("- Pipeline: " +
                         ref.getFullName() + "\n");                    
    } else if (typeId.equalsIgnoreCase("BusinessService")) {
        fileWriter.write("- BusinessService: " +
                         ref.getFullName() + "\n");
    } else {
        //fileWriter.write(ref.getFullName());
    }
}

fileWriter.write("" + "\n");

For more info see my previous MBeans 11g article.

ResourceConfigurationMBean

In the Oracle Enterprise Manager FMC 12c (in my case: http://localhost:7101/em) I navigated to SOA / service-bus and opened the System MBean Browser:

Here the ResourceConfigurationMBeans can be found under com.oracle.osb.


[Via MBean Browser]

If we navigate to a particular ResourceConfigurationMBean for a proxy service (for example …$proxy$TrackService), the information on the right is as follows:


[Via MBean Browser]

As in the 11g version the attributes Configuration, Metadata and Name are available.

If we navigate to a particular ResourceConfigurationMBean for a pipeline (for example …$pipeline$TrackServicePipeline), the information on the right is as follows:


[Via MBean Browser]

As you can see the value for attribute “Configuration” for this pipeline is “Unavailable”.

Remember the following Java code in OSBServiceExplorer.java (see my previous MBeans 11g article):

for (ObjectName osbResourceConfiguration :
    osbResourceConfigurations) {
 
    CompositeDataSupport configuration =
        (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                      "Configuration");

So now, apparently, getting the configuration can result in a NullPointerException. This has to be dealt with in the new 12c version of OSBServiceExplorer.java – besides the fact that a pipeline now is a new resource type. A defensive variant is sketched below.
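
The sketch below assumes the surrounding loop from the fragment above is unchanged:

CompositeDataSupport configuration = null;
try {
    configuration =
        (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                      "Configuration");
} catch (Exception e) {
    // for example a pipeline for which monitoring has not been enabled
}
if (configuration == null) {
    fileWriter.write("Resource is a Pipeline (without available Configuration)" + "\n");
    continue; // skip further processing of this resource
}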

But of course, for our OSB service explorer we are particularly interested in the elements (nodes) of the pipeline. In order to get this information available in the System MBean Browser, something has to be done: monitoring has to be enabled for the pipeline.

Via the Oracle Enterprise Manager FMC 12c I navigated to SOA / service-bus / Home / Projects / TrackService and clicked on tab “Operations”:

Here you can see the Operations settings of this particular service.

Next, I clicked on the pipeline “TrackServicePipeline” and enabled “Monitoring”.

If we then navigate back to the ResourceConfigurationMBean for pipeline “TrackServicePipeline”, the information on the right is as follows:


[Via MBean Browser]

So now the desired configuration information is available.

Remark:
For the pipeline “MusicServicePipeline” the monitoring is still disabled, so the configuration is still unavailable.

Diving into attribute Configuration of the ResourceConfigurationMBean

For each found pipeline, proxy and business service the configuration information (canonicalName, service-type, transport-type, url) is written to file.

Proxy service configuration:
Please see my previous MBeans 11g article.

Business service configuration:
Please see my previous MBeans 11g article.

Pipeline configuration:
Below is an example of a pipeline configuration (content fragment of the text file):

Configuration of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean: service-type=SOAP

If the pipeline configuration is unavailable, the following is shown:

Resource is a Pipeline (without available Configuration)

The pipelines can be recognized by the Pipeline$ prefix.

Pipeline, element hierarchy

In the 11g version of OSBServiceExplorer.java, for a proxy service the elements (nodes) of the pipeline were investigated.

See the code fragment below:

CompositeDataSupport pipeline =
    (CompositeDataSupport)configuration.get("pipeline");
TabularDataSupport nodes =
    (TabularDataSupport)pipeline.get("nodes");

In 12c, however, this doesn’t work for a proxy service. The same code can, however, still be used for a pipeline.

For pipeline “TrackServicePipeline”, the configuration (including nodes) looks like:


[Via MBean Browser]

Based on the nodes information (with node-id) in the MBean Browser and the content of pipeline “TrackServicePipeline.pipeline” the following structure can be put together:

The mapping between the node-id and the corresponding element in the Message Flow can be achieved by looking in the .pipeline file for the _ActionId- identification, mentioned as the value for the name key.

Example of the details of node with node-id = 4 and name = _ActionId-7f000001.N38d9a220.0.163b507de28.N7ffc:


[Via MBean Browser]

Content of pipeline “TrackServicePipeline.pipeline”:

<?xml version="1.0" encoding="UTF-8"?>
<con:pipelineEntry xmlns:con="http://www.bea.com/wli/sb/pipeline/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:con1="http://www.bea.com/wli/sb/stages/config" xmlns:con2="http://www.bea.com/wli/sb/stages/routing/config" xmlns:con3="http://www.bea.com/wli/sb/stages/transform/config">
    <con:coreEntry>
        <con:binding type="SOAP" isSoap12="false" xsi:type="con:SoapBindingType">
            <con:wsdl ref="TrackService/proxy/TrackService"/>
            <con:binding>
                <con:name>TrackServiceBinding</con:name>
                <con:namespace>http://trackservice.services.soatraining.amis/</con:namespace>
            </con:binding>
        </con:binding>
        <con:xqConfiguration>
            <con:snippetVersion>1.0</con:snippetVersion>
        </con:xqConfiguration>
    </con:coreEntry>
    <con:router>
        <con:flow>
            <con:route-node name="RouteNode1">
                <con:context>
                    <con1:userNsDecl prefix="trac" namespace="http://trackservice.services.soatraining.amis/"/>
                </con:context>
                <con:actions>
                    <con2:route>
                        <con1:id>_ActionId-7f000001.N38d9a220.0.163b507de28.N7ffc</con1:id>
                        <con2:service ref="TrackService/business/CDService" xsi:type="ref:BusinessServiceRef" xmlns:ref="http://www.bea.com/wli/sb/reference"/>
                        <con2:operation>getTracksForCD</con2:operation>
                        <con2:outboundTransform>
                            <con3:replace varName="body" contents-only="true">
                                <con1:id>_ActionId-7f000001.N38d9a220.0.163b507de28.N7ff9</con1:id>
                                <con3:location>
                                    <con1:xpathText>.</con1:xpathText>
                                </con3:location>
                                <con3:expr>
                                    <con1:xqueryTransform>
                                        <con1:resource ref="TrackService/Resources/xquery/CDService_getTracksForCDRequest"/>
                                        <con1:param name="getTracksForCDRequest">
                                            <con1:path>$body/trac:getTracksForCDRequest</con1:path>
                                        </con1:param>
                                    </con1:xqueryTransform>
                                </con3:expr>
                            </con3:replace>
                        </con2:outboundTransform>
                        <con2:responseTransform>
                            <con3:replace varName="body" contents-only="true">
                                <con1:id>_ActionId-7f000001.N38d9a220.0.163b507de28.N7ff6</con1:id>
                                <con3:location>
                                    <con1:xpathText>.</con1:xpathText>
                                </con3:location>
                                <con3:expr>
                                    <con1:xqueryTransform>
                                        <con1:resource ref="TrackService/Resources/xquery/CDService_getTracksForCDResponse"/>
                                        <con1:param name="getTracksForCDResponse">
                                            <con1:path>$body/*[1]</con1:path>
                                        </con1:param>
                                    </con1:xqueryTransform>
                                </con3:expr>
                            </con3:replace>
                        </con2:responseTransform>
                    </con2:route>
                </con:actions>
            </con:route-node>
        </con:flow>
    </con:router>
</con:pipelineEntry>

It’s obvious that the nodes in the pipeline form a hierarchy. A node can have children, which in turn can also have children, etc. Because we are only interested in certain kinds of nodes (Route, Java Callout, Service Callout, etc.), some kind of filtering is needed. For more info about this, see my previous MBeans 11g article.

Diving into attribute Metadata of the ResourceConfigurationMBean

For each found pipeline the metadata information (dependencies and dependents) is written to file.

Example content fragment of the text file:

Metadata of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean
dependencies:
– BusinessService$TrackService$business$CDService
– WSDL$TrackService$proxy$TrackService

dependents:
– ProxyService$TrackService$proxy$TrackService
– ProxyService$TrackService$proxy$TrackServiceRest
– ProxyService$TrackService$proxy$TrackServiceRestJSON

As can be seen in the MBean Browser, the metadata for a particular pipeline shows the dependencies on other resources (like business services and WSDLs) and other services that are dependent on the pipeline.

For more info and the responsible code fragment see my previous MBeans 11g article.

Remark:
In the Java code, dependencies on XQuery resources are filtered out and not written to the text file.

MBeans with regard to version 11.1.1.7

In the sample Java code shown at the end of my previous MBeans 11g article, the use of the following MBeans can be seen:

MBean and other classes, and the jar file in which they can be found:

– weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean.class: <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar
– weblogic.management.runtime.ServerRuntimeMBean.class: <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar
– com.bea.wli.sb.management.configuration.ALSBConfigurationMBean.class: <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-api.jar
– com.bea.wli.config.Ref.class: <Middleware Home Directory>/Oracle_OSB1/modules/com.bea.common.configfwk_1.7.0.0.jar
– weblogic.management.jmx.MBeanServerInvocationHandler.class: <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar
– com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class: <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-impl.jar

Therefore, in JDeveloper 11g, the following Project Libraries and Classpath settings were made:

Description and class path:

– Com.bea.common.configfwk_1.6.0.0.jar: /oracle/fmwhome/Oracle_OSB1/modules/com.bea.common.configfwk_1.6.0.0.jar
– Sb-kernel-api.jar: /oracle/fmwhome/Oracle_OSB1/lib/sb-kernel-api.jar
– Sb-kernel-impl.jar: /oracle/fmwhome/Oracle_OSB1/lib/sb-kernel-impl.jar
– Wlfullclient.jar: /oracle/fmwhome/wlserver_10.3/server/lib/wlfullclient.jar

For more info about these MBeans, see my previous MBeans 11g article.

In order to connect to a WebLogic MBean Server, in my previous MBeans 11g article I used the thick client wlfullclient.jar.

This library is not provided by default in a WebLogic install and must be built. The simple way to do this is described in “Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server, Using the WebLogic JarBuilder Tool”, which can be reached via this url: https://docs.oracle.com/cd/E28280_01/web.1111/e13717/jarbuilder.htm#SACLT240.

So I built wlfullclient.jar as follows:

cd <Middleware Home Directory>/wlserver_10.3/server/lib
java -jar wljarbuilder.jar

In the sample Java code shown at the end of this article, the use of the same MBeans can be seen. However, in JDeveloper 12c, changes to the Project Libraries and Classpath settings were necessary, due to changes in the jar files used in the 12c environment. Also, wlfullclient.jar is deprecated as of WebLogic Server 12.1.3!

Overview of WebLogic Client jar files

WebLogic Full Client (protocol T3):
– weblogic.jar (6 KB): via the manifest file MANIFEST.MF, classes in other JAR files are referenced
– wlfullclient.jar (111,131 KB): deprecated as of WebLogic Server 12.1.3

WebLogic Thin Client (protocol IIOP):
– wlclient.jar (2,128 KB)
– wljmxclient.jar (238 KB)

WebLogic Thin T3 Client (protocol T3):
– wlthint3client.jar (7,287 KB)

Remark with regard to version 12.2.1:

Due to changes in the JDK, WLS no longer supports JMX with just the wlclient.jar. To use JMX, you must use either the ”full client” (weblogic.jar) or wljmxclient.jar.
[https://docs.oracle.com/middleware/1221/wls/JMXCU/accesswls.htm#JMXCU144]

WebLogic Full Client

The WebLogic full client, wlfullclient.jar, is deprecated as of WebLogic Server 12.1.3 and may be removed in a future release. Oracle recommends using the WebLogic Thin T3 client or other appropriate client depending on your environment.
[https://docs.oracle.com/middleware/1213/wls/SACLT/t3.htm#SACLT130]

For WebLogic Server 10.0 and later releases, client applications need to use the wlfullclient.jar file instead of the weblogic.jar. A WebLogic full client is a Java RMI client that uses Oracle’s proprietary T3 protocol to communicate with WebLogic Server, thereby leveraging the Java-to-Java model of distributed computing.
[https://docs.oracle.com/middleware/1213/wls/SACLT/t3.htm#SACLT376]

Not all functionality available with weblogic.jar is available with the wlfullclient.jar. For example, wlfullclient.jar does not support Web Services, which requires the wseeclient.jar. Nor does wlfullclient.jar support operations necessary for development purposes, such as ejbc, or support administrative operations, such as deployment, which still require using the weblogic.jar.
[https://docs.oracle.com/middleware/1213/wls/SACLT/t3.htm#SACLT376]

WebLogic Thin Client

In order to connect to a WebLogic MBean Server, it is also possible to use a thin client wljmxclient.jar (in combination with wlclient.jar). This JAR contains Oracle’s implementation of the HTTP and IIOP protocols.

Remark:
wlclient.jar is included in wljmxclient.jar‘s MANIFEST ClassPath entry, so wlclient.jar and wljmxclient.jar need to be in the same directory, or both jars need to be specified on the classpath.

Ensure that weblogic.jar or wlfullclient.jar is not included in the classpath if wljmxclient.jar is included. Only the thin client wljmxclient.jar/wlclient.jar or the thick client wlfullclient.jar should be used, but not a combination of both. [https://docs.oracle.com/middleware/1221/wls/JMXCU/accesswls.htm#JMXCU144]

WebLogic Thin T3 Client

The WebLogic Thin T3 Client jar (wlthint3client.jar) is a light-weight, high performing alternative to the wlfullclient.jar and wlclient.jar (IIOP) remote client jars. The Thin T3 client has a minimal footprint while providing access to a rich set of APIs that are appropriate for client usage. As its name implies, the Thin T3 Client uses the WebLogic T3 protocol, which provides significant performance improvements over the wlclient.jar, which uses the IIOP protocol.

The Thin T3 Client is the recommended option for most remote client use cases. There are some limitations in the Thin T3 client, as outlined below. For those few use cases, you may need to use the full client or the IIOP thin client.

Limitations and Considerations:

This release does not support the following:

  • Mbean-based utilities (such as JMS Helper, JMS Module Helper), and JMS multicast are not supported. You can use JMX calls as an alternative to “mbean-based helpers.”
  • JDBC resources, including WebLogic JDBC extensions.
  • Running a WebLogic RMI server in the client.

The Thin T3 client uses JDK classes to connect to the host, including when connecting to dual-stacked machines. If multiple addresses are available on the host, the connection may attempt to go to the wrong address and fail if the host is not properly configured.
[https://docs.oracle.com/middleware/12212/wls/SACLT/wlthint3client.htm#SACLT387]

MBeans with regard to version 12.2.1

As I mentioned earlier in this article, in order to get the Java code working in a 12.2.1 environment, I had to make some changes.

MBean and other classes | Jar file
weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean.class | <Middleware Home Directory>/wlserver/server/lib/wlfullclient.jar
weblogic.management.runtime.ServerRuntimeMBean.class | <Middleware Home Directory>/wlserver/server/lib/wlfullclient.jar
com.bea.wli.sb.management.configuration.ALSBConfigurationMBean.class | <Middleware Home Directory>/osb/lib/modules/oracle.servicebus.kernel-api.jar
com.bea.wli.config.Ref.class | <Middleware Home Directory>/osb/lib/modules/oracle.servicebus.configfwk.jar
weblogic.management.jmx.MBeanServerInvocationHandler.class | <Middleware Home Directory>/wlserver/modules/com.bea.core.management.jmx.jar
com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class | <Middleware Home Directory>/osb/lib/modules/oracle.servicebus.kernel-wls.jar

In JDeveloper 12c, the following Project Libraries and Classpath settings were made (at first):

Description | Class Path
com.bea.core.management.jmx.jar | /u01/app/oracle/fmw/12.2/wlserver/modules/com.bea.core.management.jmx.jar
oracle.servicebus.configfwk.jar | /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.configfwk.jar
oracle.servicebus.kernel-api.jar | /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-api.jar
oracle.servicebus.kernel-wls.jar | /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-wls.jar
wlfullclient.jar | /u01/app/oracle/fmw/12.2/wlserver/server/lib/wlfullclient.jar

Using wlfullclient.jar:
At first I still used the thick client wlfullclient.jar (despite the fact that it's deprecated), which I built as follows:

cd <Middleware Home Directory>/wlserver/server/lib
java -jar wljarbuilder.jar
Creating new jar file: wlfullclient.jar

wlfullclient.jar and jarbuilder are deprecated starting from the WebLogic 12.1.3 release.
Please use one of the equivalent stand-alone clients instead. Consult Oracle WebLogic public documents for details.

Compiling and running the OSBServiceExplorer tool in JDeveloper worked.

Using weblogic.jar:
When I changed wlfullclient.jar into weblogic.jar, the OSBServiceExplorer tool also worked.

Using wlclient.jar:
When I changed wlfullclient.jar into wlclient.jar, the OSBServiceExplorer tool did not work, because of errors on:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;

Using wlclient.jar and wljmxclient.jar:
Also adding wljmxclient.jar did not work, because of errors on:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;

Adding wls-api.jar:
So in order to try resolving the errors shown above, I also added wls-api.jar. But then I got an error on:

String name = serverRuntimeMBean.getName();

I then decided to go for the Oracle-recommended WebLogic Thin T3 client, wlthint3client.jar.

Using wlthint3client.jar:
When I changed wlfullclient.jar into wlthint3client.jar, the OSBServiceExplorer tool did not work, because of errors on:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;

Using wlthint3client.jar and wls-api.jar:
So in order to try resolving the errors shown above, I also added wls-api.jar. But then again I got an error on:

String name = serverRuntimeMBean.getName();

However, I could run the OSBServiceExplorer tool in JDeveloper, but then I got the error:

Error(160,49): cannot access weblogic.security.ServerRuntimeSecurityAccess; class file for weblogic.security.ServerRuntimeSecurityAccess not found

I found that several jar files in the WebLogic installation contain this class and could solve this error.

For the time being, I extracted the needed class file (weblogic.security.ServerRuntimeSecurityAccess.class) from the smallest of these jar files into a lib directory on the file system, and in JDeveloper added that lib directory as a Classpath entry to the Project.
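
The following shell sketch illustrates this search-and-extract approach; the modules directory, the lib directory and the jar file name used here are assumptions (the final classpath shown below suggests com.oracle.weblogic.security.jar):

# List the jar files under the modules directory that contain the missing class
cd /u01/app/oracle/fmw/12.2/wlserver/modules
for f in *.jar; do
  unzip -l "$f" | grep -q "weblogic/security/ServerRuntimeSecurityAccess.class" && echo "$f"
done

# Extract the class file from the chosen (smallest) jar file into a lib directory
mkdir -p /tmp/lib && cd /tmp/lib
jar xf /u01/app/oracle/fmw/12.2/wlserver/modules/com.oracle.weblogic.security.jar weblogic/security/ServerRuntimeSecurityAccess.class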

As it turned out, I had to repeat these steps for the following errors I still got after extending the Classpath:

Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/utils/collections/WeakConcurrentHashMap

Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/management/runtime/TimeServiceRuntimeMBean

Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/management/partition/admin/ResourceGroupLifecycleOperations$RGState

After that, compiling and running the OSBServiceExplorer tool in JDeveloper worked.

Using the lib directory with the extracted class files was not what I wanted; adding the corresponding jar files seemed a better idea. So I picked the jar files with the smallest size to get the job done, and discarded the lib directory.

So in the end, in JDeveloper 12c, the following Project Libraries and Classpath settings were made:

Description | Class Path
com.bea.core.management.jmx.jar | /u01/app/oracle/fmw/12.2/wlserver/modules/com.bea.core.management.jmx.jar
com.oracle.weblogic.management.base.jar | /u01/app/oracle/fmw/12.2/wlserver/modules/com.oracle.weblogic.management.base.jar
com.oracle.weblogic.security.jar | /u01/app/oracle/fmw/12.2/wlserver/modules/com.oracle.weblogic.security.jar
com.oracle.webservices.wls.jaxrpc-client.jar | /u01/app/oracle/fmw/12.2/wlserver/modules/clients/com.oracle.webservices.wls.jaxrpc-client.jar
oracle.servicebus.configfwk.jar | /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.configfwk.jar
oracle.servicebus.kernel-api.jar | /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-api.jar
oracle.servicebus.kernel-wls.jar | /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-wls.jar
wlthint3client.jar | /u01/app/oracle/fmw/12.2/wlserver/server/lib/wlthint3client.jar
wls-api.jar | /u01/app/oracle/fmw/12.2/wlserver/server/lib/wls-api.jar

Shell script

For ease of use, a shell script file was created to run the tool, which uses MBeans to explore pipeline, proxy and business services. The WebLogic Server contains a set of MBeans that can be used to configure, monitor and manage WebLogic Server resources.

The content of the shell script file "OSBServiceExplorer" is:

#!/bin/bash

# Script to call OSBServiceExplorer

echo "Start calling OSBServiceExplorer"

java -classpath "OSBServiceExplorer.jar:oracle.servicebus.configfwk.jar:com.bea.core.management.jmx.jar:oracle.servicebus.kernel-api.jar:oracle.servicebus.kernel-wls.jar:wlthint3client.jar:wls-api.jar:com.oracle.weblogic.security.jar:com.oracle.webservices.wls.jaxrpc-client.jar:com.oracle.weblogic.management.base.jar" nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer "xyz" "7001" "weblogic" "xyz"

echo "End calling OSBServiceExplorer"

In the shell script file, a class named OSBServiceExplorer is called via the java executable. The main method of this class expects the following parameters:

Parameter name | Description
HOSTNAME | Host name of the AdminServer
PORT | Port of the AdminServer
USERNAME | Username
PASSWORD | Password

Example content of the generated text file (OSBServiceExplorer.txt):

Found server runtimes:
- Server name: DefaultServer. Server state: RUNNING

Found total of 45 refs, including the following pipelines, proxy and business services:
- ProxyService: TrackService/proxy/TrackServiceRest
- BusinessService: MusicService/business/db_InsertCD
- BusinessService: TrackService/business/CDService
- Pipeline: TrackService/pipeline/TrackServicePipeline
- ProxyService: TrackService/proxy/TrackService
- Pipeline: MusicService/pipeline/MusicServicePipeline
- ProxyService: MusicService/proxy/MusicService
- ProxyService: TrackService/proxy/TrackServiceRestJSON

ResourceConfiguration list of pipelines, proxy and business services:
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$MusicService$proxy$MusicService,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$MusicService$proxy$MusicService,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=/music/MusicService
- Resource: com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean: service-type=SOAP

    Index#4:
       level    = 1
       label    = route
       name     = _ActionId-7f000001.N38d9a220.0.163b507de28.N7ffc
       node-id  = 4
       type     = Action
       children = [1,3]
    Index#6:
       level    = 1
       label    = route-node
       name     = RouteNode1
       node-id  = 6
       type     = RouteNode
       children = [5]

  Metadata of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean
    dependencies:
      - BusinessService$TrackService$business$CDService
      - WSDL$TrackService$proxy$TrackService

    dependents:
      - ProxyService$TrackService$proxy$TrackService
      - ProxyService$TrackService$proxy$TrackServiceRest
      - ProxyService$TrackService$proxy$TrackServiceRestJSON

- Resource: com.oracle.osb:Location=DefaultServer,Name=Operations$System$Operator Settings$GlobalOperationalSettings,Type=ResourceConfigurationMBean
- Resource: com.oracle.osb:Location=DefaultServer,Name=Pipeline$MusicService$pipeline$MusicServicePipeline,Type=ResourceConfigurationMBean
  Resource is a Pipeline (without available Configuration)
- Resource: com.oracle.osb:Location=DefaultServer,Name=BusinessService$MusicService$business$db_InsertCD,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=BusinessService$MusicService$business$db_InsertCD,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=jca, url=jca://eis/DB/MUSIC
- Resource: com.oracle.osb:Location=DefaultServer,Name=BusinessService$TrackService$business$CDService,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=BusinessService$TrackService$business$CDService,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=http://127.0.0.1:7101/cd_services/CDService
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRest,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRest,Type=ResourceConfigurationMBean: service-type=REST, transport-type=http, url=/music/TrackServiceRest
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackService,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackService,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=/music/TrackService
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRestJSON,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRestJSON,Type=ResourceConfigurationMBean: service-type=REST, transport-type=http, url=/music/TrackServiceRestJSON

The Java code:

package nl.xyz.osbservice.osbserviceexplorer;


import com.bea.wli.config.Ref;
import com.bea.wli.sb.management.configuration.ALSBConfigurationMBean;

import java.io.FileWriter;
import java.io.IOException;

import java.net.MalformedURLException;

import java.util.Collection;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.Properties;
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import javax.naming.Context;

import weblogic.management.jmx.MBeanServerInvocationHandler;
import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;


public class OSBServiceExplorer {
    private static MBeanServerConnection connection;
    private static JMXConnector connector;
    private static FileWriter fileWriter;

    /**
     * Indent a string
     * @param indent - The number of indentations to add before a string 
     * @return String - The indented string
     */
    private static String getIndentString(int indent) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < indent; i++) {
            sb.append("  ");
        }
        return sb.toString();
    }


    /**
     * Print composite data (write to file)
     * @param nodes - The list of nodes
     * @param key - The list of keys
     * @param level - The level in the hierarchy of nodes
     */
    private void printCompositeData(TabularDataSupport nodes, Object[] key,
                                    int level) {
        try {
            CompositeData compositeData = nodes.get(key);

            fileWriter.write(getIndentString(level) + "     level    = " +
                             level + "\n");

            String label = (String)compositeData.get("label");
            String name = (String)compositeData.get("name");
            String nodeid = (String)compositeData.get("node-id");
            String type = (String)compositeData.get("type");
            String[] children = (String[])compositeData.get("children");
            if (level == 1 ||
                (label.contains("route-node") || label.contains("route"))) {
                fileWriter.write(getIndentString(level) + "     label    = " +
                                 label + "\n");

                fileWriter.write(getIndentString(level) + "     name     = " +
                                 name + "\n");

                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     type     = " +
                                 type + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = children.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(children[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            } else if (level >= 2) {
                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = children.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(children[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            }

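            // Recurse into the children: at level 1 only for operational branch nodes, at deeper levels always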
            if ((level == 1 && type.equals("OperationalBranchNode")) ||
                level > 1) {
                level++;

                int size = children.length;

                for (int i = 0; i < size; i++) {
                    key[0] = children[i];
                    printCompositeData(nodes, key, level);
                }
            }

        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public OSBServiceExplorer(HashMap<String, String> props) {
        super();


        try {

            Properties properties = new Properties();
            properties.putAll(props);

            initConnection(properties.getProperty("HOSTNAME"),
                           properties.getProperty("PORT"),
                           properties.getProperty("USERNAME"),
                           properties.getProperty("PASSWORD"));


            DomainRuntimeServiceMBean domainRuntimeServiceMBean =
                (DomainRuntimeServiceMBean)findDomainRuntimeServiceMBean(connection);

            ServerRuntimeMBean[] serverRuntimes =
                domainRuntimeServiceMBean.getServerRuntimes();

            fileWriter = new FileWriter("OSBServiceExplorer.txt", false);


            fileWriter.write("Found server runtimes:\n");
            int length = serverRuntimes.length;
            for (int i = 0; i < length; i++) {
                ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];
                
                String name = serverRuntimeMBean.getName();
                String state = serverRuntimeMBean.getState();
                fileWriter.write("- Server name: " + name +
                                 ". Server state: " + state + "\n");
            }
            fileWriter.write("" + "\n");

            // Create an mbean instance to perform configuration operations in the created session.
            //
            // There is a separate instance of ALSBConfigurationMBean for each session.
            // There is also one more ALSBConfigurationMBean instance which works on the core data, i.e., the data which ALSB runtime uses.
            // An ALSBConfigurationMBean instance is created whenever a new session is created via the SessionManagementMBean.createSession(String) API.
            // This mbean instance is then used to perform configuration operations in that session.
            // The mbean instance is destroyed when the corresponding session is activated or discarded.
            ALSBConfigurationMBean alsbConfigurationMBean =
                (ALSBConfigurationMBean)domainRuntimeServiceMBean.findService(ALSBConfigurationMBean.NAME,
                                                                              ALSBConfigurationMBean.TYPE,
                                                                              null);            

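            // Collect all resource references (proxy services, pipelines, business services, etc.) in the domain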
            Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);

            fileWriter.write("Found total of " + refs.size() +
                             " refs, including the following pipelines, proxy and business services:\n");

            for (Ref ref : refs) {
                String typeId = ref.getTypeId();

                if (typeId.equalsIgnoreCase("ProxyService")) {
                    fileWriter.write("- ProxyService: " + ref.getFullName() +
                                     "\n");
                } else if (typeId.equalsIgnoreCase("Pipeline")) {
                    fileWriter.write("- Pipeline: " +
                                     ref.getFullName() + "\n");                    
                } else if (typeId.equalsIgnoreCase("BusinessService")) {
                    fileWriter.write("- BusinessService: " +
                                     ref.getFullName() + "\n");
                } else {
                    //fileWriter.write(ref.getFullName());
                }
            }

            fileWriter.write("" + "\n");

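            // Query the MBean server for all OSB ResourceConfigurationMBeans in the com.oracle.osb JMX domain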
            String domain = "com.oracle.osb";
            String objectNamePattern =
                domain + ":" + "Type=ResourceConfigurationMBean,*";

            Set<ObjectName> osbResourceConfigurations =
                connection.queryNames(new ObjectName(objectNamePattern), null);
            
            fileWriter.write("ResourceConfiguration list of pipelines, proxy and business services:\n");
            for (ObjectName osbResourceConfiguration :
                 osbResourceConfigurations) {

                String canonicalName =
                    osbResourceConfiguration.getCanonicalName();
                fileWriter.write("- Resource: " + canonicalName + "\n");
                              
                try {
                    CompositeDataSupport configuration =
                        (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                      "Configuration");
                      
                    if (canonicalName.contains("ProxyService")) {
                        String servicetype =
                            (String)configuration.get("service-type");
                        CompositeDataSupport transportconfiguration =
                            (CompositeDataSupport)configuration.get("transport-configuration");
                        String transporttype =
                            (String)transportconfiguration.get("transport-type");
                        String url = (String)transportconfiguration.get("url");
                        
                        fileWriter.write("  Configuration of " + canonicalName +
                                         ":" + " service-type=" + servicetype +
                                         ", transport-type=" + transporttype +
                                         ", url=" + url + "\n");
                    } else if (canonicalName.contains("BusinessService")) {
                        String servicetype =
                            (String)configuration.get("service-type");
                        CompositeDataSupport transportconfiguration =
                            (CompositeDataSupport)configuration.get("transport-configuration");
                        String transporttype =
                            (String)transportconfiguration.get("transport-type");
                        CompositeData[] urlconfiguration =
                            (CompositeData[])transportconfiguration.get("url-configuration");
                        String url = (String)urlconfiguration[0].get("url");
    
                        fileWriter.write("  Configuration of " + canonicalName +
                                         ":" + " service-type=" + servicetype +
                                         ", transport-type=" + transporttype +
                                         ", url=" + url + "\n");
                    } else if (canonicalName.contains("Pipeline")) {
                        String servicetype =
                            (String)configuration.get("service-type");
    
                        fileWriter.write("  Configuration of " + canonicalName +
                                         ":" + " service-type=" + servicetype + "\n");
                    }
                    
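                    // For pipelines, also dump the node tree (exposed as JMX tabular data) and the resource metadata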
                    if (canonicalName.contains("Pipeline")) {
                        fileWriter.write("" + "\n");
    
                        CompositeDataSupport pipeline =
                            (CompositeDataSupport)configuration.get("pipeline");
                        TabularDataSupport nodes =
                            (TabularDataSupport)pipeline.get("nodes");
    
                        TabularType tabularType = nodes.getTabularType();
                        CompositeType rowType = tabularType.getRowType();
    
                        Iterator keyIter = nodes.keySet().iterator();
    
                        for (int j = 0; keyIter.hasNext(); ++j) {
    
                            Object[] key = ((Collection)keyIter.next()).toArray();
    
                            CompositeData compositeData = nodes.get(key);
    
                            String label = (String)compositeData.get("label");
                            String type = (String)compositeData.get("type");
                            if (type.equals("Action") &&
                                (label.contains("wsCallout") ||
                                 label.contains("javaCallout") ||
                                 label.contains("route"))) {
    
                                fileWriter.write("    Index#" + j + ":\n");
                                printCompositeData(nodes, key, 1);
                            } else if (type.equals("OperationalBranchNode") ||
                                       type.equals("RouteNode")) {
    
                                fileWriter.write("    Index#" + j + ":\n");
                                printCompositeData(nodes, key, 1);
                            }
                        }
                        
                        fileWriter.write("" + "\n");
                        
                        CompositeDataSupport metadata =
                            (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                          "Metadata");
                        
                        fileWriter.write("  Metadata of " + canonicalName + "\n");
    
                        String[] dependencies =
                            (String[])metadata.get("dependencies");
                        fileWriter.write("    dependencies:\n");
                        int size;
                        size = dependencies.length;
                        for (int i = 0; i < size; i++) {
                            String dependency = dependencies[i];
                            if (!dependency.contains("Xquery")) {
                                fileWriter.write("      - " + dependency + "\n");
                            }
                        }
                        fileWriter.write("" + "\n");
    
                        String[] dependents = (String[])metadata.get("dependents");
                        fileWriter.write("    dependents:\n");
                        size = dependents.length;
                        for (int i = 0; i < size; i++) {
                            String dependent = dependents[i];
                            fileWriter.write("      - " + dependent + "\n");
                        }
                        fileWriter.write("" + "\n");                
                    }
                }
                catch(Exception e) {
                    if (canonicalName.contains("Pipeline")) {
                      fileWriter.write("  Resource is a Pipeline (without available Configuration)" + "\n");
                    } else {
                      e.printStackTrace();
                    }
                }
            }
            fileWriter.close();

            System.out.println("Successfully completed");

        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (connector != null)
                try {
                    connector.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
        }
    }


    /*
       * Initialize connection to the Domain Runtime MBean Server.
       */

    public static void initConnection(String hostname, String portString,
                                      String username,
                                      String password) throws IOException,
                                                              MalformedURLException {

        String protocol = "t3";
        Integer portInteger = Integer.valueOf(portString);
        int port = portInteger.intValue();
        String jndiroot = "/jndi/";
        String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;

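        // Build a t3 JMX service URL targeting the Domain Runtime MBean Server via its JNDI name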
        JMXServiceURL serviceURL =
            new JMXServiceURL(protocol, hostname, port, jndiroot +
                              mbeanserver);

        Hashtable<String, Object> hashtable = new Hashtable<String, Object>();
        hashtable.put(Context.SECURITY_PRINCIPAL, username);
        hashtable.put(Context.SECURITY_CREDENTIALS, password);
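        // Use WebLogic's remote JMX protocol provider so the t3 protocol is understood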
        hashtable.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                      "weblogic.management.remote");
        hashtable.put("jmx.remote.x.request.waiting.timeout", new Long(10000));

        connector = JMXConnectorFactory.connect(serviceURL, hashtable);
        connection = connector.getMBeanServerConnection();
    }


    private static Ref constructRef(String refType, String serviceURI) {
        Ref ref = null;
        String[] uriData = serviceURI.split("/");
        ref = new Ref(refType, uriData);
        return ref;
    }


    /**
     * Finds the specified MBean object
     *
     * @param connection - A connection to the MBeanServer.
     * @return Object - The MBean or null if the MBean was not found.
     */
    public Object findDomainRuntimeServiceMBean(MBeanServerConnection connection) {
        try {
            ObjectName objectName =
                new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);
            return (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                            objectName);
        } catch (MalformedObjectNameException e) {
            e.printStackTrace();
            return null;
        }
    }


    public static void main(String[] args) {
        try {
            if (args.length < 4) {
                System.out.println("Provide values for the following parameters: HOSTNAME, PORT, USERNAME, PASSWORD.");

            } else {
                HashMap<String, String> map = new HashMap<String, String>();

                map.put("HOSTNAME", args[0]);
                map.put("PORT", args[1]);
                map.put("USERNAME", args[2]);
                map.put("PASSWORD", args[3]);
                OSBServiceExplorer osbServiceExplorer =
                    new OSBServiceExplorer(map);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}

The post Oracle Service Bus 12.2.1.1.0: Service Exploring via WebLogic Server MBeans with JMX appeared first on AMIS Oracle and Java Blog.
