
Automate the installation of Oracle JDK 8 and 10 on RHEL and Debian derivatives


Automating the Oracle JDK installation on RHEL derivatives (such as CentOS, Oracle Linux) and Debian derivatives (such as Mint, Ubuntu) differs. This is due to different package managers and repositories. In this blog I’ll provide quick instructions on how to automate the installation of Oracle JDK 8 and 10 on different Linux distributions. I chose JDK 8 and 10 since they are currently the only Oracle JDK versions which receive public updates (see here).

Debian derivatives

A benefit of using the repositories below is that you will often get the latest version and can easily update an existing installation to the latest version if you want.

Oracle JDK 8

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections
echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
sudo apt-get -y install oracle-java8-installer
sudo apt-get -y install oracle-java8-set-default

Oracle JDK 10

sudo add-apt-repository ppa:linuxuprising/java
sudo apt-get update
echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections
echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
sudo apt-get -y install oracle-java10-installer
sudo apt-get -y install oracle-java10-set-default

RHEL derivatives

Since RHEL derivatives are often provided by commercial software vendors such as RedHat and Oracle, their repositories typically work on a subscription basis, since people pay for using them. Configuration of the specific repositories and subscriptions of course differs per vendor and product. For Oracle Linux you can look here. For RedHat you can look here.

The procedure described below makes you independent of vendor-specific subscriptions; however, you will not get automatic updates, and if you want the latest version you have to manually update the download URL (from here) and the Java installation path in the alternatives commands. You might also encounter issues with the validity of the cookie used, which might require you to update the URL.

Oracle JDK 8

sudo wget -O ~/jdk8.rpm -N --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.rpm
sudo yum -y localinstall ~/jdk8.rpm
sudo update-alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_181-amd64/jre/bin/java 1
sudo update-alternatives --install /usr/bin/jar jar /usr/java/jdk1.8.0_181-amd64/bin/jar 1
sudo update-alternatives --install /usr/bin/javac javac /usr/java/jdk1.8.0_181-amd64/bin/javac 1
sudo update-alternatives --install /usr/bin/javaws javaws /usr/java/jdk1.8.0_181-amd64/jre/bin/javaws 1

Oracle JDK 10

sudo wget -O ~/jdk10.rpm -N --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/10.0.2+13/19aef61b38124481863b1413dce1855f/jdk-10.0.2_linux-x64_bin.rpm
sudo yum -y localinstall ~/jdk10.rpm
sudo update-alternatives --install /usr/bin/java java /usr/java/jdk-10.0.2/bin/java 1
sudo update-alternatives --install /usr/bin/jar jar /usr/java/jdk-10.0.2/bin/jar 1
sudo update-alternatives --install /usr/bin/javac javac /usr/java/jdk-10.0.2/bin/javac 1
sudo update-alternatives --install /usr/bin/javaws javaws /usr/java/jdk-10.0.2/bin/javaws 1
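
After the installation you can verify the result; the version reported will of course depend on the RPM you downloaded:

java -version
sudo update-alternatives --display java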

The post Automate the installation of Oracle JDK 8 and 10 on RHEL and Debian derivatives appeared first on AMIS Oracle and Java Blog.


Running Spring Tool Suite and other GUI applications from a Docker container


Running an application within a Docker container helps in isolating the application from the host OS. Running GUI applications like for example an IDE from a Docker container, can be challenging. I’ll explain several of the issues you might encounter and how to solve them. For this I will use Spring Tool Suite as an example. The code (Dockerfile and docker-compose.yml) can also be found here. Due to (several) security concerns, this is not recommended in a production environment.

Running a GUI from a Docker container

In order to run a GUI application from a Docker container and display its GUI on the host OS, several steps are needed:

Which display to use?

The container needs to be aware of the display to use. In order to make the display available, you can pass the DISPLAY environment variable to the container. docker-compose describes the environment variables, volume mappings, port mappings and other properties of Docker containers. This makes it easier to run containers in a quick and reproducible way and avoids long command lines.

docker-compose

You can do this by providing it in a docker-compose.yml file. See for example below. The environment indicates the host DISPLAY variable is passed as DISPLAY variable to the container.
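
A minimal fragment of such a file – just the environment part; the complete docker-compose.yml used in this post follows further down:

services:
    sts:
        environment:
            - DISPLAY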

Docker

In a Docker command (when not using docker-compose), you would do this with the -e or --env flag. For example:

docker run --env DISPLAY=$DISPLAY containername

Allow access to the display

The Docker container needs to be allowed to present its screen on the Docker host. This can be done by executing the following command:

xhost local:root

After execution, during the session, root is allowed to use the current user’s display. Since the Docker daemon runs as root, Docker containers (in general!) can now use the current user’s display. If you want to persist this, you should add it to a start-up script.

Sharing the X socket

The last thing to do is sharing the X socket (don’t ask me details but this is required…). This can be done by defining a volume mapping in your Docker command line or docker-compose.yml file. For Ubuntu this looks like the mapping shown below.
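
The relevant fragment – the same mapping appears in the complete docker-compose.yml later in this post:

volumes:
    - /tmp/.X11-unix:/tmp/.X11-unix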

Spring Tool Suite from a Docker container

In order to give a complete working example, I’ll show how to run Spring Tool Suite from a Docker container. In this example I’m using the Docker host JVM instead of installing a JVM inside the container. If you want to have the JVM also inside the container (instead of using the host JVM), look at the following and add that to the Dockerfile. As a base image I’m using an official Ubuntu image.

I’ve used the following Dockerfile:

FROM ubuntu:18.04

MAINTAINER Maarten Smeets <maarten.smeets@amis.nl>

ARG uid

LABEL nl.amis.smeetsm.ide.name="Spring Tool Suite" nl.amis.smeetsm.ide.version="3.9.5"

ADD https://download.springsource.com/release/STS/3.9.5.RELEASE/dist/e4.8/spring-tool-suite-3.9.5.RELEASE-e4.8.0-linux-gtk-x86_64.tar.gz /tmp/ide.tar.gz

RUN adduser --uid ${uid} --disabled-password --gecos '' develop

RUN mkdir -p /opt/ide && \
    tar zxvf /tmp/ide.tar.gz --strip-components=1 -C /opt/ide && \
    ln -s /usr/lib/jvm/java-10-oracle /opt/ide/sts-3.9.5.RELEASE/jre && \
    chown -R develop:develop /opt/ide && \
    mkdir /home/develop/ws && \
    chown develop:develop /home/develop/ws && \
    mkdir /home/develop/.m2 && \
    chown develop:develop /home/develop/.m2 && \
    rm /tmp/ide.tar.gz && \
    apt-get update && \
    apt-get install -y libxslt1.1 libswt-gtk-3-jni libswt-gtk-3-java && \
    apt-get autoremove -y && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /tmp/*

USER develop:develop
WORKDIR /home/develop
ENTRYPOINT /opt/ide/sts-3.9.5.RELEASE/STS -data /home/develop/ws

The specified packages are required to be able to run STS inside the container and create the GUI to display on the host.

I’ve used the following docker-compose.yml file:

version: '3'

services:
    sts:
        build:
            context: .
            dockerfile: Dockerfile
            args:
                uid: ${UID}
        container_name: "sts"
        volumes:
            - /tmp/.X11-unix:/tmp/.X11-unix
            - /home/develop/ws:/home/develop/ws
            - /home/develop/.m2:/home/develop/.m2
            - /usr/lib/jvm/java-10-oracle:/usr/lib/jvm/java-10-oracle
            - /etc/java-10-oracle:/etc/java-10-oracle
        environment:
            - DISPLAY
        user: develop
        ports:
            - "8080:8080"

Notice this docker-compose file has some dependencies on the host OS. It expects a JDK 10 to be installed in /usr/lib/jvm/java-10-oracle with configuration in /etc/java-10-oracle. It also expects /home/develop/ws and /home/develop/.m2 to be present on the host, to be mapped to the container. The .X11-unix mapping was already mentioned as needed to allow a GUI screen to be displayed. There are also some other things which are important to notice in this file.

User id

First, the way a non-privileged user is created inside the container. This user is created with a user id (uid) which is supplied as a parameter. Why did I do that? Files in mapped volumes which are created by the container user will be created with the uid of the user inside the container. This will cause issues if the user inside the container has a different uid than the user outside of the container. Suppose I run the container under a user develop. This user on the host has a uid of 1002. Inside the container there is also a user develop, with a uid of 1000. Files on a mapped volume are created with uid 1000; the uid of the user in the container. On the host however, uid 1000 is a different user. These files created by the container cannot be accessed by the develop user on the host (with uid 1002). In order to avoid this, I’m creating a develop user inside the container with the same uid as the user outside of the container (the user in the docker group which gave the command to start the container).

Workspace folder and Maven repository

When working with Docker containers, it is a common practice to avoid storing state inside the container. State can be various things. I consider the STS application work-space folder and the Maven repository among them. This is why I’ve created the folders inside the container and mapped them in the docker-compose file to the host. They will use folders with the same name (/home/develop/.m2 and /home/develop/ws) on the host.

Java

My Docker container with only Spring Tool Suite was big enough already without having a JVM of more than 300 MB inside of it (on Linux, Java 10 is almost double the size of Java 8). I’m using the host JVM instead. I installed the host JVM on my Ubuntu development VM as described here.

In order to use the host JVM inside the Docker container, I needed to do two things:

  • Map two folders into the container: /usr/lib/jvm/java-10-oracle and /etc/java-10-oracle (the volumes entries in the docker-compose.yml above).
  • Map the JVM path to the JRE folder under STS: ln -s /usr/lib/jvm/java-10-oracle /opt/ide/sts-3.9.5.RELEASE/jre.

Seeing it work

First as mentioned, allow access to the display:

xhost local:root

Since the build uses the variable UID, you should do:

export UID=$UID

Next build:

docker-compose build
Building sts
Step 1/10 : FROM ubuntu:18.04
---> 735f80812f90
Step 2/10 : MAINTAINER Maarten Smeets <maarten.smeets@amis.nl>
---> Using cache
---> 69177270763e
Step 3/10 : ARG uid
---> Using cache
---> 85c9899e5210
Step 4/10 : LABEL nl.amis.smeetsm.ide.name="Spring Tool Suite" nl.amis.smeetsm.ide.version="3.9.5"
---> Using cache
---> 82f56ab07a28
Step 5/10 : ADD https://download.springsource.com/release/STS/3.9.5.RELEASE/dist/e4.8/spring-tool-suite-3.9.5.RELEASE-e4.8.0-linux-gtk-x86_64.tar.gz /tmp/ide.tar.gz

---> Using cache
---> 61ab67d82b0e
Step 6/10 : RUN adduser --uid ${uid} --disabled-password --gecos '' develop
---> Using cache
---> 679f934d3ccd
Step 7/10 : RUN mkdir -p /opt/ide && tar zxvf /tmp/ide.tar.gz --strip-components=1 -C /opt/ide && ln -s /usr/lib/jvm/java-10-oracle /opt/ide/sts-3.9.5.RELEASE/jre && chown -R develop:develop /opt/ide && mkdir /home/develop/ws && chown develop:develop /home/develop/ws && rm /tmp/ide.tar.gz && apt-get update && apt-get install -y libxslt1.1 libswt-gtk-3-jni libswt-gtk-3-java && apt-get autoremove -y && apt-get clean && rm -rf /var/lib/apt/lists/* && rm -rf /tmp/*
---> Using cache
---> 5e486a4d6dd0
Step 8/10 : USER develop:develop
---> Using cache
---> c3c2b332d932
Step 9/10 : WORKDIR /home/develop
---> Using cache
---> d8e45440ce31
Step 10/10 : ENTRYPOINT /opt/ide/sts-3.9.5.RELEASE/STS -data /home/develop/ws
---> Using cache
---> 2d95751237d7
Successfully built 2d95751237d7
Successfully tagged t_sts:latest

Next run:

docker-compose up

When you run a Spring Boot application on port 8080 inside the container, you can access it on the host on port 8080 with for example Firefox.

The post Running Spring Tool Suite and other GUI applications from a Docker container appeared first on AMIS Oracle and Java Blog.

Generic Docker Container Image for running and live reloading a Node application based on a GitHub Repo


My desire: find a way to run a Node application from a Git(Hub) repository using a generic Docker container and be able to refresh the running container on the fly whenever the sources in the repo are updated. The process of producing containers for each application and upon each change of the application is too cumbersome and time consuming for certain situations – including rapid development/test cycles and live demonstrations. I am looking for a convenient way to run a Node application anywhere I can run a Docker container – without having to build and push a container image – and to continuously update the running application in mere seconds rather than minutes. This article describes what I created to address that requirement.

Key ingredient in the story: nodemon – a tool that monitors a file system for any changes in a node.js application and automatically restarts the server when there are such changes. What I had to put together:

  • a generic Docker container based on the official Node image – with npm and a git client inside

  • adding nodemon (to monitor the application sources)
  • adding a background Node application that can refresh from the Git repository – upon an explicit request, based on a job schedule and triggered by a Git webhook
  • defining an environment variable GITHUB_URL for the url of the source Git repository for the Node application
  • adding a startup script that runs when the container is run for the first time (clone from Git repo specified through GITHUB_URL and run application with nodemon) or restarted (just run application with nodemon)


I have been struggling a little bit with the Docker syntax and operations (CMD vs RUN vs ENTRYPOINT) and the Linux bash shell scripts – and I am sure my result can be improved upon.

The Dockerfile that builds the Docker container with all generic elements looks like this:

FROM node:8
# copy the Node Reload server - exposed at port 4500
COPY package.json /tmp
COPY server.js /tmp
RUN cd /tmp && npm install
EXPOSE 4500
RUN npm install -g nodemon
COPY startUpScript.sh /tmp
COPY gitRefresh.sh /tmp
# make the scripts executable at build time; this must be RUN, not CMD -
# a CMD does not execute during the build (and would be ignored here anyway
# because of the ENTRYPOINT)
RUN chmod +x /tmp/startUpScript.sh /tmp/gitRefresh.sh
ENTRYPOINT ["sh", "/tmp/startUpScript.sh"]

Feel free to pick any other node base image – from https://hub.docker.com/_/node/. For example: node:10.

The startUpScript that is executed whenever the container is started – taking care of the initial cloning of the Node application from the Git(Hub) URL to directory /tmp/app and of running that application using nodemon – is shown below. Note the trick (inspired by StackOverflow) to run a script only when the container is run for the very first time.

#!/bin/sh
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e $CONTAINER_ALREADY_STARTED ]; then
    touch $CONTAINER_ALREADY_STARTED
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
    cd /tmp
    # prepare the actual Node app from GitHub
    mkdir app
    git clone $GITHUB_URL app
    cd app
    #install dependencies for the Node app
    npm install
    #start  both the reload app and (using nodemon) the actual Node app
    cd ..
    (echo "starting reload app") & (echo "start reload";npm start; echo "reload app finished") & 
    cd app; 
    echo "starting nodemon for app cloned from $GITHUB_URL";
    nodemon
else
    echo "-- Not first container startup --"
    cd /tmp
    (echo "starting reload app and nodemon") & (echo "start reload";npm start; echo "reload app finished") & (cd app; echo "start nodemon") &
    cd app; 
    echo "starting nodemon for app cloned from $GITHUB_URL";
    nodemon
fi

The startup script runs the live reloader application in the background – using (echo "start reload";npm start) &. That final ampersand (&) takes care of running the command in the background. This npm start command runs the server.js file in /tmp. This server listens at port 4500 for requests. When a request is received at /reload, the application will execute the gitRefresh.sh shell script that performs a git pull in the /tmp/app directory where the git clone of the repository was targeted.
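
The gitRefresh.sh script itself is not listed here; a minimal sketch of what it does – a git pull in the directory where the repository was cloned:

#!/bin/sh
# pull the latest application sources into the existing clone
cd /tmp/app
git pull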

/* this program listens for /reload request at port 4500. 
or a GitHub WebHook trigger (see: https://technology.amis.nl/2018/03/20/handle-a-github-push-event-from-a-web-hook-trigger-in-a-node-application/)
When it receives such a request, it will perform a Git pull in the app sub directory (from where this application runs) 

TODO
- add the option to schedule an automatic periodic git pull

- use https://www.npmjs.com/package/simple-git instead of shelljs plus local Git client (this could allow usage of a lighter base image - e.g. node-slim)
*/

const RELOAD_PATH = '/reload'
const GITHUB_WEBHOOK_PATH = '/github/push'

var http = require('http');
var server = http.createServer(function (request, response) {
    console.log(`method ${request.method} and url ${request.url}`)
    if (request.method === 'GET' && request.url === RELOAD_PATH) {
        console.log(`reload request starting at ${new Date().toISOString()}...`);
        refreshAppFromGit();
        response.write(`RELOADED!!${new Date().toISOString()}`);
        response.end();
        console.log('reload request handled...');
    } 
    else if (request.method === 'POST' && request.url === GITHUB_WEBHOOK_PATH) {
        let body = [];
        request.on('data', (chunk) => {
            body.push(chunk);
          }).on('end', () => {
            body = Buffer.concat(body).toString();
            // at this point, `body` has the entire request body stored in it as a string
          
        console.log(`GitHub WebHook event handling starting ${new Date().toISOString()}...`);
        var githubEvent = JSON.parse(body)
        console.debug(`github event: ${JSON.stringify(githubEvent)}`)
        // - githubEvent.head_commit is the last (and frequently the only) commit
        // - githubEvent.pusher is the user of the pusher pusher.name and pusher.email
        // - timestamp of final commit: githubEvent.head_commit.timestamp
        // - branch:  githubEvent.ref (refs/heads/master)
        try {
        var commits = {}
        if (githubEvent.commits)
            commits = githubEvent.commits.reduce(
                function (agg, commit) {
                    agg.messages = agg.messages + commit.message + ";"
                    agg.filesTouched = agg.filesTouched.concat(commit.added).concat(commit.modified).concat(commit.removed)
                    //                        .filter(file => file.indexOf("src/js/jet-composites/input-country") > -1)
                    return agg
                }
                , { "messages": "", "filesTouched": [] })

           var push = {
            "finalCommitIdentifier": githubEvent.after,
            "pusher": githubEvent.pusher,
            "timestamp": githubEvent.head_commit.timestamp,
            "branch": githubEvent.ref,
            "finalComment": githubEvent.head_commit.message,
            "commits": commits
        }
        console.log("WebHook Push Event: " + JSON.stringify(push))
        if (push.commits.filesTouched.length > 0) {
            console.log("This commit involves changes to the Node application, so let's perform a git pull ")
            refreshAppFromGit();
        }
    } catch (e) {
        console.error("GitHub WebHook handling failed with error "+e)
    }

        response.write('handled');
        response.end();
        console.log(`GitHub WebHook event handling complete at ${new Date().toISOString()}`);
    });
    }
    else {
        // respond
        response.write('Reload is live at path ' + RELOAD_PATH);
        response.end();
    }
}); //http.createServer
server.listen(4500);
console.log('Server running and listening at Port 4500');

//https://stackoverflow.com/questions/44647778/how-to-run-shell-script-file-using-nodejs
// https://www.npmjs.com/package/shelljs

var shell = require('shelljs');
var pwd = shell.pwd()
console.info(`current dir ${pwd}`)

function refreshAppFromGit() {
    try {
        if (shell.exec('./gitRefresh.sh').code !== 0) {
            shell.echo('Error: Git Pull failed');
//            shell.exit(1);
        } else {
            //        shell.exec('npm install')
            //  shell.exit(0);
        }
    } catch (e) {
        console.error("Error while trying to execute ./gitRefresh " + e)
    }
}

Using the node-run-live-reload image

Now that you know a little about the inner workings of the image, let me show you how to use it (also see instructions here: https://github.com/lucasjellema/docker-node-run-live-reload).

To build the image yourself, clone the GitHub repo and run

docker build -t "node-run-live-reload:0.1" .

using of course your own image tag if you like. I have pushed the image to Docker Hub as lucasjellema/node-run-live-reload:0.1. You can use this image like this:

docker run --name express -p 3011:3000 -p 4505:4500  -e GITHUB_URL=https://github.com/shapeshed/express_example -d lucasjellema/node-run-live-reload:0.1

In the terminal window we can get the logging from within the container using

docker logs express --follow


After the application has been cloned from GitHub, npm has installed the dependencies and nodemon has started the application, we can access it at <host>:3011 (because of the port mapping in the docker run command).

When the application sources are updated in the GitHub repository, we can use a GET request (from CURL or the browser) to <host>:4505 to refresh the container with the latest application definition.

The logging from the container indicates that a git pull was performed – and returned no new sources.

Because there are no changed files, nodemon will not restart the application in this case.

A GitHub WebHook can be configured on the GitHub Repository. It should be configured with the endpoint host:4500/github/push. When this is in place – and the host is exposed on the public internet – then any commit to the application’s GitHub repository will send a signal to the reload utility in the container (similar to the manual call to the /reload endpoint) and the refresh of the application takes place (git pull, npm install, restart by nodemon).

One requirement at this moment for this generic container to work is that the Node application has a package.json with a scripts.start entry in its root directory; nodemon expects that entry as instruction on how to run the application. This same package.json is used with npm install to install the required libraries for the Node application.

Summary

To give an overview of what this article has introduced: if you want to run a Node application whose sources are available in a GitHub repository, then all you need is a Docker host and these are your steps:

  1. Pull the Docker image: docker pull lucasjellema/node-run-live-reload:0.1
    (this image currently contains the Node 8 runtime, npm, nodemon, a git client and the reloader application)
    Alternatively: build and tag the container yourself.
  2. Run the container image, passing the GitHub URL of the repo containing the Node application; specify required port mappings for the Node application and the reloader (port 4500): docker run --name express -p 3011:3000 -p 4500:4500 -e GITHUB_URL=<GIT HUB REPO URL> -d lucasjellema/node-run-live-reload:0.1
  3. When the container is started, it will clone the Node application from GitHub
  4. Using npm install, the dependencies for the application are installed
  5. Using nodemon the application is started (and the sources are monitored so to restart the application upon changes)
  6. Now the application can be accessed at the host running the Docker container on the port as mapped per the docker run command
  7. With an HTTP request to the /reload endpoint, the reloader application in the container is instructed to git pull the sources from the GitHub repository and run npm install to fetch any changed or added dependencies
  8. if any sources were changed, nodemon will now automatically restart the Node application
  9. the upgraded Node application can be accessed


Next Steps

Some next steps I am contemplating with this generic container image – and I welcome your pull requests – include:

  • allow an automated periodic application refresh to be configured through an environment variable on the container (and/or through a call to an endpoint on the reload application) instructing the reloader to do a git pull every X seconds.
  • use https://www.npmjs.com/package/simple-git instead of shelljs plus local Git client (this could allow usage of a lighter base image – e.g. node-slim instead of node)
  • force a restart of the Node application – even if it is not changed at all
  • allow for alternative application startup scenarios besides running the scripts.start entry in the package.json in the root of the application


Resources

GitHub Repository with the resources for this article – including the Dockerfile to build the container: https://github.com/lucasjellema/docker-node-run-live-reload

My article on my previous attempt at creating a generic Docker container for running a Node application from GitHub: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/

Article and Documentation on nodemon: https://medium.com/lucjuggery/docker-in-development-with-nodemon-d500366e74df and https://github.com/remy/nodemon#nodemon

NPM module shelljs that allows shell commands to be executed from Node applications: https://www.npmjs.com/package/shelljs

The post Generic Docker Container Image for running and live reloading a Node application based on a GitHub Repo appeared first on AMIS Oracle and Java Blog.

Running Reactive Spring Boot on GraalVM in Docker


GraalVM is an open source polyglot VM which makes it easy to mix and match different languages such as Java, Javascript and R. It has the ability (with some restrictions) to compile code to native executables. This of course offers great performance benefits. Recently, GraalVM Docker files and images have become available. See here.

Since Spring Boot is a popular Java framework and reactive (non-blocking) RESTful services/clients implemented in Spring Boot are also interesting to look at, I thought: let’s combine those and produce a Docker image running a reactive Spring Boot application on GraalVM.

I’ve used and combined the following. As a base I’ve used the code provided in the Git repository here. In the ‘complete’ folder (the end result of the tutorial) there is a sample reactive RESTful web service and client.

The reactive Spring Boot RESTful web service and client

When looking at the sample, you can see how you can implement a non-blocking web service and client. Basically this means you use:

  • org.springframework.web.reactive.function.server.ServerRequest and ServerResponse instead of org.springframework.web.bind.annotation.RestController
  • Mono<ServerResponse> for the response of the web service
  • for a web service client you use org.springframework.web.reactive.function.client.ClientResponse and Mono<ClientResponse> for getting a response
  • since you won’t use the (classic blocking) RestController with the RequestMapping annotations, you need to create your own configuration class which defines routes using org.springframework.web.reactive.function.server.RouterFunctions

Since the response is not directly a POJO, it needs to be converted into one explicitly, for example with res.bodyToMono(String.class). For more details look at this tutorial or browse this repository.

Personally I would have liked to have something like a ReactiveRestController and keep the rest (pun intended) the same. This would make refactoring to reactive services and clients easier.

GraalVM

GraalVM is a polyglot VM open sourced by Oracle. It has a community edition and an enterprise edition, which provides improved performance (a smaller footprint) and better security (sandboxing capabilities for native code), as indicated here. The community edition can be downloaded from GitHub and the enterprise edition from Oracle’s Technology Network. Support for GraalVM on Windows is currently still under development and not released yet. A challenge for Oracle with GraalVM will be to keep the polyglot systems it supports up to date version-wise. This already was a challenge with for example the R support in Oracle database and Node support in Application Container Cloud Service. See here.

When you download GraalVM CE you’ll get GraalVM with a specific OpenJDK 8 version (for GraalVM 1.0.0-rc8 this is 1.8.0_172). When you download GraalVM EE from OTN, you’ll get Oracle JDK 8 of the same version.

GraalVM and LLVM

GraalVM supports LLVM. LLVM is a popular toolset to provide language agnostic compilation and optimization of code for specific platforms. LLVM is one of the reasons many new programming languages have been popping up recently. Read more about LLVM here or visit their site here. If you can compile a language into LLVM bitcode or LLVM Intermediate Representation (IR), you can run it on GraalVM (see here). The LLVM bitcode is additionally optimized by GraalVM to receive even better results.

GraalVM and R

GraalVM uses FastR which is based on GNU-R, the reference implementation of R. This is an alternative implementation of the R language for GraalVM and thus not actual R! For example: ‘support for dplyr and data.table are on the way’. Read more here. Especially if you use exotic packages in R, I expect there to be compatibility issues. It is interesting to compare the performance of FastR on GraalVM to compiling R code to LLVM instructions and run that on GraalVM (using something like RLLVMCompile). Haven’t tried that though. GraalVM seems to have momentum at the moment and I’m not so sure about RLLVMCompile.

Updating the JVM of GraalVM

You can check out the following post here for building GraalVM with a JDK 8 version. This refers to documentation on GitHub here.

“Graal depends on a JDK that supports a compatible version of JVMCI (JVM Compiler Interface). There is a JVMCI port for JDK 8 and the required JVMCI version is built into the JDK as of JDK 11 (build 20 or later).”

I have not tried this but it seems thus relatively easy to compile GraalVM from sources with support for a different JDK.

GraalVM in Docker

Oracle has recently provided GraalVM as Docker images and put the Dockerfile’s in their Github repository. See here. These are only available for the community edition. Since the Dockerfiles are provided on GitHub, it is easy to make your own GraalVM EE images if you want (for example if you want to test with GraalVM using the Oracle JDK instead of OpenJDK).

To checkout GraalVM you can run the container like:


docker run -it oracle/graalvm-ce:1.0.0-rc8 bash

bash-4.2# gu available
Downloading: Component catalog
ComponentId Version Component name
----------------------------------------------------------------
python 1.0.0-rc8 Graal.Python
R 1.0.0-rc8 FastR
ruby 1.0.0-rc8 TruffleRuby

Spring Boot in GraalVM in Docker

How to run a Spring Boot application in Docker is relatively easy and described here. I’ve also run Spring Boot applications on various VMs and described the process on how to achieve this here. As indicated above, I’ve used this Ubuntu Development VM.


sudo apt-get install maven
git clone https://github.com/spring-guides/gs-reactive-rest-service.git
cd gs-reactive-rest-service/complete

Now create a Dockerfile:


FROM oracle/graalvm-ce:1.0.0-rc8
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

Edit the pom.xml file

Add to the properties tag a prefix variable:


<properties>
    <java.version>1.8</java.version>
    <docker.image.prefix>springio</docker.image.prefix>
</properties>

Add a build plugin


<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>dockerfile-maven-plugin</artifactId>
            <version>1.3.6</version>
            <configuration>
                <repository>${docker.image.prefix}/${project.artifactId}</repository>
                <buildArgs>
                    <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
                </buildArgs>
            </configuration>
        </plugin>
    </plugins>
</build>

Now you can do:


mvn clean package
mvn dockerfile:build

And run it:


docker run -p 8080:8080 -t springio/gs-reactive-rest-service:latest
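
To check that the service responds you can call it – assuming the /hello endpoint of the sample from the Spring guide:

curl http://localhost:8080/hello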

It’s as simple as that!

The post Running Reactive Spring Boot on GraalVM in Docker appeared first on AMIS Oracle and Java Blog.

Oracle APEX: the low-code and low-cost application middle tier


Oracle APEX is a low code application development framework. It can be used free of charge – either as part of an existing Oracle Database license or running in the free Oracle Database 18c XE product. An Oracle APEX application should be considered a three-tier application – consisting of a client tier (the browser), the middle tier (the APEX application engine) and the data tier (back end databases and REST APIs on top of various systems and data stores).

The multi-tier architecture for the most common approach to APEX applications can be summarized as follows.

The application is used by the end user in a browser. From that browser, HTTP requests are made to the ORDS (Oracle REST Data Services) listener – running on a web server in the DMZ. Requests for the APEX application engine are passed to the PL/SQL packages that make up this engine. The APEX application engine runs in an Oracle Database – which could well be the light weight and free Oracle Database Express Edition.

Whenever data must be retrieved in order to handle a request – or data should be manipulated as result of a user action – the APEX middle tier will reach out to the backend system. This will frequently be an Oracle Database – but it does not have to be.

Looking closer at the role of the Oracle Database: it is very important to realize the (logical) decoupling between the APEX application engine and the Oracle Database that contains the business data presented and manipulated in the APEX application. In most APEX applications, the Oracle Database will appear in three locations. It could be the same Oracle Database instance in all three cases – but it does not have to be.

ORDS is a REST server with its own meta data defining “modules” and “templates”. The meta data for ORDS can be stored in any Oracle database (in an ORDS schema), so it could be the local database serving APEX or another database. ORDS may become capable of working without a database repo and just use XML files for a highly optimized runtime, with no metadata to look up (this is not currently a supported feature – although the capability exists for internal use at Oracle, for example in SQL Developer Web – but it is on the roadmap).

The APEX application engine has its local database – an Oracle Database instance – that contains the meta-data that describes the application itself (pages, fields, navigation, validation logic, and more). It also holds the relevant session data: APEX is a stateless engine; the user session state is held partly in the client and partly in the APEX database. Note: the size of this session state is very small – typically just a few KB. The APEX database can also be used as data cache – to retain local, quickly accessible, read only copies of data retrieved from remote sources.

The APEX application can reach out to business data in the local database in which it is running itself as well to a PDB co-located within the same container database (in a shared multi-tenant instance, you would use database link syntax – with schema.objectname@PDB_LINK – and when the PDBs for APEX and the business data are in the same root, the Oracle Database can transparently re-write queries expressed in Database Link syntax to use SQL that is effectively local). It can also access business data in a different Oracle Database across a database link or through an ORDS instance that exposes HTTP access to packages and tables in this other database instance.
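
As a hypothetical illustration of that database link syntax (schema, table and link name made up):

-- query business data in another PDB / remote database over a database link
SELECT * FROM hr.employees@PDB_LINK;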


Mike Hichwa, VP Software Development at Oracle and responsible among other products for APEX, has shared many insights with me, some of which I have paraphrased here:

“Accessing data in a PDB from APEX in a different PDB (with both PDBs in the same CDB) can be done today via database link syntax.  The read access is optimized but updates do perform distributed transactions.  The intra-PDB access is being improved and optimized. You may or will be able to use simplified syntax (e.g. not database links) and that updates without two phase commit / distributed transactions. Database links and gateways (for non-Oracle databases) can be used, but would not be recommended for applications with large numbers of end users (so personally I like database links / gateways for ETL but not for general web apps)”

The idea that APEX is only suitable for low code application development on top of an Oracle Database is no longer correct. Through database links and gateways, APEX has long been able to run against business data in different databases. With the Oracle Database capabilities to call out to Web Services (SOAP and REST, for example using the UTL_HTTP package) it was also quite possible – albeit a bit cumbersome – to create an APEX application on external data sources. With the recent APEX feature called Web Source Module it has become largely a declarative – low code! – action to retrieve data from or manipulate data through a remote REST API.

With Web Source Modules, any REST API becomes a viable data source for APEX applications. Low code application development against virtually any backend becomes a reality: the APEX application can show and potentially create and update data from various types of databases (through REST APIs or possibly through heterogeneous gateways), microservices and serverless functions, as well as from Oracle SaaS applications.


Note: ORDS is expected to have support for MySQL sometime in 2019. That would enable quick, declarative exposure of tables in MySQL through a generated REST API. ORDS also has support for the TimesTen in-memory database on its 2019 roadmap. For other databases – SQL or NoSQL – a REST API has to be developed. For this, several tools and frameworks are available and of course implementing REST APIs is quite straightforward these days.

APEX is not only a Low Code Application Middle Tier – it is also a Low Cost Application Middle Tier

Low code development is attractive because of the high productivity and high quality (robustness) that can be achieved with a relatively low investment in technological skills and knowledge. A quick time to market can be realized. All of this applies primarily if functional requirements can be met by the out of the box capabilities of the low code framework.

The cost of low code development is of course also determined by the cost of the tooling and run time infrastructure that is required. With APEX, this cost is extremely low. The required components for developing and running an APEX application are ORDS and APEX on an Oracle Database. ORDS can be used for free and can run on a free web server like Apache Tomcat, Jetty or GlassFish (note: The documentation does state that “Glassfish Server support will be desupported in a future release”) . The database used for running APEX can be Oracle Database 18c XE – free as well!

And, as discussed before, the business data can be held in various data stores – from Oracle Database (any edition including the free XE) to MySQL or other open source databases, either SQL/ACID or NoSQL/BASE.


APEX – more than just low code

The term “low code” (associated with the Citizen Developer) is a catchphrase that perhaps does not deal well with close scrutiny. The essence of software development is not coding – as in writing lines of program code. It is much more about capturing the logic associated with functional requirements – in a structured way such that a machine can execute the logic. How you instruct the machine – with low level code or high level no code – is not so relevant in my book. Low code frameworks can help speed up the process of laying down the machine instructions – and improve the quality of those instructions by providing a framework within which they are created, with reusable constructs and visual representation.

One of the challenges with low code platforms can be that the abstract high level language for describing the application behavior may not have enough expressiveness to capture all nuances stated in the business requirements. A lower level programming model may be needed then to capture the nuances and subtleties. As Joel Kallman – Director of Software Development at Oracle, responsible for APEX – states:

“APEX has the important ability to gracefully transition from No Code to Low Code to High Control.  Many customers can live within the “black box” of APEX and do no coding.  But everyone needs to customize, and the way you customize is with code.  With APEX, you can use a very small amount of code (snippets, as we call it) to customize your application.  It could be a snippet of PL/SQL or snippet of CSS, HTML or JavaScript.  Most low code frameworks abruptly go from no code to a high control (or “high code”) environment, with no middle ground.  Once you go into high code, you’ve lost all but the most professional of developers.  With APEX, it’s a very elegant and seamless transition. And for those who demand high control (high code), you can still exploit pre-compiled PL/SQL packages or JavaScript libraries or completely customized HTML templates & themes.  “

The post Oracle APEX: the low-code and low-cost application middle tier appeared first on AMIS Oracle and Java Blog.

Monitoring Spring Boot applications with Prometheus and Grafana


In order to compare the performance of different JDKs for reactive Spring Boot services, I made a setup in which a Spring Boot application is wrapped in a Docker container. This makes it easy to create different containers for different JDKs with the same Spring Boot application running in it. The Spring Boot application exposes metrics to Prometheus. Grafana can read these metrics and allows to make nice visualizations from it. This blog post describes a setup to get you up and running in minutes. A next post will show the JDK comparisons. You can download the code here (in the complete folder). To indicate how easy this setup is, getting it up and running and write this blog post took me less than 1.5 hours total. I did not have much prior knowledge on Prometheus and Grafana save for a single workshop at AMIS by Lucas Jellema (see here).

Wrapping Spring Boot in a Docker container

Wrapping Spring Boot applications in a Docker container is easy. See for example here. You need to do the following:

Create a Dockerfile as follows (change the FROM entry to get a different JDK):
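
The Dockerfile itself is not reproduced here; a sketch along the lines of the GraalVM post above – the openjdk:10 base image is an assumption, swap the FROM entry for the JDK you want to compare:

FROM openjdk:10
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]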

Add a plugin to the pom.xml file.

And define the property used:
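
These snippets are presumably the same as in the GraalVM post earlier – the Spotify dockerfile-maven-plugin plus the docker.image.prefix property:

<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.3.6</version>
    <configuration>
        <repository>${docker.image.prefix}/${project.artifactId}</repository>
        <buildArgs>
            <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
    </configuration>
</plugin>

<properties>
    <docker.image.prefix>springio</docker.image.prefix>
</properties>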

Now you can do mvn clean package dockerfile:build and it will create the Docker image springio/gs-reactive-rest-service:latest for you. You can run this with: docker run -p 8080:8080 -t springio/gs-reactive-rest-service:latest

Making Prometheus style metrics available from Spring Boot

In order to make Prometheus metrics available from the Spring Boot application, some dependencies need to be added (see here).
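
Typically this means adding the Spring Boot actuator and the Micrometer Prometheus registry – a sketch, with versions assumed to be managed by the Spring Boot parent POM:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

The prometheus endpoint usually also needs to be exposed in application.properties:

management.endpoints.web.exposure.include=prometheus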

Now you can run the Docker container and go to a URL like http://localhost:8080/actuator/prometheus and you will see something like:
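
The endpoint returns the Prometheus text exposition format; roughly like the fragment below (metric names and values will differ per application and JDK):

# HELP jvm_memory_used_bytes The amount of used memory
# TYPE jvm_memory_used_bytes gauge
jvm_memory_used_bytes{area="heap",id="PS Eden Space",} 1.77209328E8
jvm_memory_used_bytes{area="heap",id="PS Survivor Space",} 1.1741328E7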

Provide Prometheus configuration

I’ve provided a small configuration file to make Prometheus look at the metrics URL from Spring Boot (see here):
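
A minimal prometheus.yml along these lines – the spring-boot hostname matches the container_name established in the docker-compose file below:

scrape_configs:
  - job_name: 'spring-boot'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['spring-boot:8080']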

Putting Spring Boot, Prometheus and Grafana together

As you can see in the Prometheus configuration above, I’ve used the hostname spring-boot. I can do this because of the docker-compose configuration entry container_name, as you can see below.

Grafana and Prometheus are the official Docker images for those products. I’ve added the previously mentioned configuration file to the Prometheus instance (the volumes entry under prometheus).
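
A sketch of such a docker-compose.yml – the image tag of the locally built Spring Boot application and the exact file locations are assumptions:

version: '3'

services:
    spring-boot:
        image: springio/gs-reactive-rest-service:latest
        container_name: spring-boot
        ports:
            - "8080:8080"
    prometheus:
        image: prom/prometheus
        container_name: prometheus
        volumes:
            - ./prometheus.yml:/etc/prometheus/prometheus.yml
        ports:
            - "9090:9090"
    grafana:
        image: grafana/grafana
        container_name: grafana
        ports:
            - "3000:3000"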

Now I can do docker-compose up and it will start Spring Boot (available at localhost:8080), Prometheus with the configuration file (available at localhost:9090) and Grafana (available at localhost:3000). They will be put in the same Docker network and can access each other by the hostnames ‘prometheus’, ‘grafana’ and ‘spring-boot’.

Configure Grafana

In Grafana it is easy to add Prometheus as a data source.

When you have done this, you can add dashboards. An easy way to do this is to create a simple query in Prometheus and copy it to Grafana to create a graph from it. There are probably better ways to do this, but I have yet to dive into Grafana to learn more about its capabilities.

Finally

It is easy and powerful to monitor a Spring Boot application using Prometheus and Grafana. Using a docker-compose file, it is also easy to put an assembly together to start/link the different containers. This makes it easy to start fresh if you want to.

To try it out for yourself do the following (I’ve used the following VM (requires Vagrant and VirtualBox to build) with docker, docker-compose and maven preinstalled: here)


git clone https://github.com/MaartenSmeets/gs-reactive-rest-service
cd gs-reactive-rest-service/complete
mvn clean package
mvn dockerfile:build
docker-compose up

Then you can access the previously specified URL’s to access the Spring Boot application, Prometheus and Grafana.

The post Monitoring Spring Boot applications with Prometheus and Grafana appeared first on AMIS Oracle and Java Blog.

Session Catalog for Oracle OpenWorld & CodeOne 2018 in JSON files


The Oracle OpenWorld and CodeOne 2018 conferences took place in the last week of October 2018. Together, over 1500 sessions took place. The session catalog with details for all sessions is accessible at https://events.rainfocus.com/widget/oracle/oow18/catalogcodeone18 – at least at the time of writing. The session information includes title and abstract, track, session type, names of speakers, download link for the slides and more.


The data in the session catalog represents an interesting data set that could easily be used for data visualization, machine learning, business analytics and demos of web & mobile & chat application. Therefore, it could be convenient to have easy access to the data set in the session catalog.

By inspecting the HTTP calls made from the Session Catalog Web Application, it is easy to learn how the session data can be retrieved from the REST API that exposes it.


With a fairly simple Node application, HTTP requests can be made to retrieve all session data per session type and per batch of 50 sessions – see https://github.com/lucasjellema/Oracle-OpenWorld-CodeOne-2018-SessionCatalog/blob/master/produce-oow-catalog.js. Note how function delay(t) is used to schedule a function call.
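
That delay function is presumably the usual Promise-based helper; a sketch:

// resolve after t milliseconds - used to space out the batched HTTP requests
function delay(t) {
    return new Promise(function (resolve) {
        setTimeout(resolve, t);
    });
}

// usage: wait half a second before fetching the next batch of 50 sessions
// delay(500).then(fetchNextBatch);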

Executing this Node program for all session types results in JSON files with session details per session type. The data in these JSON files can be inspected in an online JSON viewer (see the Resources below).

This data can be used for various applications and demonstrations.

A more useful file can be compiled – a file that contains all sessions, and for each session only the relevant details (and, for example, not all internal identifiers). This file is oow2018-sessions-catalog.json. It has been created by the Node program compile-oow-catalog-files.js. This program leverages the JSONata language for retrieving data from complex JSON documents as well as transforming JSON documents into target documents with a different structure. JSONata is comparable in function to XPath and XSLT or XQuery. The npm module JSONata was used in this case and proved very useful.

var catalog = []
// here follows the JSONata expression - equivalent to the XSLT template or XQuery mapping
var expression = jsonata(
    ` $.{
        'code': code
        ,'title': title
        ,'abstract': abstract
        ,'duration': length
        ,'event': eventName
        ,'type': type
        ,'waitlistPeak' : waitlistLimit
        ,'speakers': participants.{'name':(firstName & ' ' & lastName)
                                  , 'company': companyName
                                  , 'jobTitle': jobTitle
                                  , 'biography' : bio
                                  , 'photoURL' : photoURL
                                  , "twitter": attributevalues[attribute_id='twitter'].value
                                  , "designations": attributevalues[attribute_id='specialdesignations'].value
                                  }
        , 'slots' : times.{ 'room': room
                          , 'date': date
                          , 'time': time   
                          , 'roomCapacity' :capacity                  
                          }                             
        , 'levels' :  attributevalues[attribute_id='SessionsbyExperienceLevel'].value
        , 'roles' :  attributevalues[attribute_id='SessionsbyYourRole'].value
        , 'track' :  attributevalues[attribute_id='Track'].value                  
        , 'slidesToDownload' : files. 
                               { "name": filename
                               ,'url':url
                               }
    }
    `);

for (sessionType of sessionTypes) {
    var file = oow2018Filename + sessionType + ".json"
    if (fs.existsSync(file)) {
        console.log("Processing " + sessionType)
        var buf = fs.readFileSync(oow2018Filename + sessionType + ".json");
        var sessions = JSON.parse(buf)

        var result = expression.evaluate(sessions);
        catalog = catalog.concat(result)
    }//if
}//for
fs.writeFileSync(oow2018Filename + '.json', JSON.stringify(catalog, null, '\t'));


Resources

GitHub repo with the Node source code as well as the JSON files created for the session catalog data: https://github.com/lucasjellema/Oracle-OpenWorld-CodeOne-2018-SessionCatalog

Online JSON Inspector: http://jsonviewer.stack.hu/

Earlier article on retrieving Session Catalog for Oracle OpenWorld and JavaOne 2017: https://technology.amis.nl/2017/08/10/when-screen-scraping-became-api-calling-gathering-oracle-openworld-2017-session-catalog-with-node/

JSONata – home page: http://jsonata.org/

JSONata – in browser try out page – http://try.jsonata.org/

The post Session Catalog for Oracle OpenWorld & CodeOne 2018 in JSON files appeared first on AMIS Oracle and Java Blog.

Devoxx Belgium 2018, From Developers For Developers


Last November, I attended the Devoxx Belgium 2018 event.

In this article I gathered some practical information about the event and the sessions that were held. Perhaps this gives you more insight into what you can expect from such an event, and it may help you decide whether, as it was for me, such an event is an opportunity for you to gather and share information. For me, this was the first Devoxx event I attended.

For more information, please see: https://devoxx.be/

Devoxx

Devoxx (previously known as JavaPolis) was founded by Stephan Janssen. Devoxx is a series of tech events organized by local community groups. There mantra is “Content is King”, as a result these events see both internationally renowned speakers and local rock stars.

Devoxx is spearheaded and supported by local communities. This means each event retains a unique regional flavor, whilst being part of the overall Devoxx movement. A spirit of openness, community, killer content and super-low priced tickets underpin the Devoxx philosophy.

Topics covered at Devoxx fall under the same radar as Voxxeddays.com, including: Server Side Java, Java Language, Cloud and Big Data, Web & HTML, Mobile, Programming Languages, Architecture & Security, Methodology, Culture and Future Technologies.
[https://beta.devoxx.com/#/about]

Devoxx Belgium 2018

From Monday 12 November to Friday 16 November 2018, the Devoxx Belgium 2018 event was held in Antwerp. There were 221 sessions and 3200 attendees.

This particular event was sold out already in June 2018. So be aware that if you want to attend such an event, you have to be on top of it.

Tickets

With regard to tickets, the following options were available:

  • Deep Dive pass (Monday & Tuesday)
  • Conference pass (Wednesday – Friday)
  • Combi pass (Monday – Friday)

[https://devoxx.be/faq]

Different kinds of sessions

During the five-day Devoxx Belgium 2018 event, different kinds of sessions were held, each with its own duration:

  • Keynote
    The opening keynote sessions inspire you and are a great way to warm up your event experience.
  • Hands-on Lab
    The hands-on labs allow you to get practical information about a selected subject in a class room setup.
  • Deep Dive
    In depth sessions (3 hours) allowing the speakers to go much deeper into a technical subject.
  • Conference
    The conference sessions are 50 minutes; this format represents the majority of the conference presentations.
  • Tools-in-Action
    The Tools-in-action are short sessions of 30 minutes, focusing on demonstrating tools, technical tools or solutions.
  • Quickie
    These are very short presentations of 15 minutes. An ideal format for beginning speakers trying to pick up some speaker credits. Or for the more seasoned speakers that want to explain a subject in only 15 minutes.
  • BOF
    BOF (Bird of a feather) sessions, are informal sessions where technical subjects are discussed in an open-minded atmosphere.

[https://dvbe18.confinabox.com/index.html]

Content tracks

The sessions can be divided into the following content tracks:

  • Cloud, Containers & Infrastructure
    Serverless, Docker, Kubernetes, Mesos, Service Mesh, Cloud, PaaS, Vagrant, etc
  • Java Language
    Java language, Java SE, JDK, performance, tuning, modularity, etc
  • Architecture & Security
    How-Tos, strategies, tools, techniques and best practices for getting architecture right. Anything related to security and development
  • Programming languages
    Other languages running on the JVM, functional, emerging languages, tools, libraries, etc
  • Big Data & Machine Learning
    Big Data, NoSQL, Spark, Hadoop, Drill, Machine learning, etc
  • Modern Web & UX
    Progressive web apps, Polymer, Web frameworks, Reactive, libraries, languages and tools to build web apps
  • Methodology & Culture
    Software development methodologies, developer culture, DevOps, CI/CD, etc
  • Server Side Java
    Java EE, App servers, Spring Framework, MOM, Related JSRs, etc
  • Mobile & IoT
    Android, iOS, Xamarin, IOT, Embedded, M2M, smart objects, connectivity, security, etc
  • Mind the Geek
    Developer candy: stuff we want to know about but don't (generally) get at work. Robotics, biological computing, cybernetics, AI, new toys, tomorrow's world

[https://dvbe18.confinabox.com]

Exhibition Floor

Tech companies from around the world showcase their solutions, products and services from Tuesday until Thursday evening.
[https://devoxx.be/]

Some examples of international speakers that were at the event


Venkat Subramaniam
Agile Developer
Dr. Venkat Subramaniam is an award-winning author, founder of Agile Developer, Inc., creator of agilelearner.com, and an instructional professor at the University of Houston. He has trained and mentored thousands of software developers in the US, Canada, Europe, and Asia, and is a regularly-invited speaker at several international conferences. Venkat helps his clients effectively apply and succeed with sustainable agile practices on their software projects. Venkat is a (co)author of multiple technical books, including the 2007 Jolt Productivity award winning book Practices of an Agile Developer. You can find a list of his books at agiledeveloper.com. You can reach him by email at venkats@agiledeveloper.com or on twitter at @venkat_s

[https://beta.devoxx.com/#/speaker/49]

Mark Reinhold
Oracle
Mark Reinhold is Chief Architect of the Java Platform Group at Oracle. His past contributions to the platform include character-stream readers and writers, reference objects, shutdown hooks, the NIO high-performance I/O APIs, library generification, service loaders, and the Jigsaw module system. Mark has held key leadership roles in every Java SE and JDK release since version 1.2, in 1998. He currently leads the JDK Project in the OpenJDK Community, where he also serves on the Governing Board. Mark holds a Ph.D. in computer science from the Massachusetts Institute of Technology.

[https://dvbe18.confinabox.com/talk/YEF-3619/Java_in_2018:_Change_is_the_Only_Constant_-_Overflow]


James Gosling
Distinguished Engineer
Amazon Web Services
Best known as the founder and lead designer behind the Java programming language

[https://en.wikipedia.org/wiki/James_Gosling]

“Devoxx” mobile app


Before the event started an app was made available. The official schedule app is named “Devoxx” and is officially supported on iOS and Android by the Gluon & Devoxx team.

Install this app and you’re set for all the future Devoxx and VoxxedDays events. Select your preferred event and start scheduling your sessions.

For downloads and more information, please see:
https://devoxx.be/the-official-my-devoxx-mobile-apps/

Features of the app are:

  • Supports all Devoxx events, one app to rule them all!
  • Create a personal schedule by favouriting talks
  • Sync your favourite presentations
  • Rate talks
  • Scan attendee badges
  • Schedule works offline

For me, the schedule with all the sessions was very helpful. Being able to see in advance which sessions were the most popular helped me in some cases decide which sessions I wanted to go to.

Detailed information about the content of a session is also available. For example (see above): 9 steps to Awesome with Kubernetes, by Burr Sutter.

Of course, the schedule and information about all the sessions is also available online. Please see:
https://dvbe18.confinabox.com/index.html

Top Rated Talks of the Week

During the week, between sessions, the “Top Rated Talks of the Week” are shown. Below you see the status on November 15th somewhere in the morning:

Depending on the speaker and the content, at some sessions the room is packed with attendees.

Information about the sessions

All sessions are streamed live, so you can watch every session – as it happens or afterwards via YouTube:
Presentations

All the photos taken at the event can be viewed at flickr: Pictures

At the event, paper schedules were also handed out:

Whilst preparing this blog article I also found a Devoxx Belgium 2018 REST API that serves the following content:

  • A collection of Conferences
  • A conference
  • A list of speakers
  • A speaker
  • A talk
  • A set of schedules
  • A schedule
  • A link
  • The list of the event’s talk formats (conference, labs, keynote…)
  • The list of the event’s tracks
  • The list of Rooms
  • A schedule for a specific room and a specific day

Please see: https://dvbe18.confinabox.com/api
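As a quick illustration, this is how such an API could be consumed from Node – a minimal sketch; the exact resource path and the firstName/lastName fields are assumptions on my part, so check the API page above for the actual resources:

// minimal sketch: list the speakers of Devoxx Belgium 2018
// (requires the node-fetch package; path and field names are assumptions)
const fetch = require('node-fetch')

fetch('https://dvbe18.confinabox.com/api/conferences/DVBE18/speakers')
    .then(response => response.json())
    .then(speakers => speakers.forEach(s => console.log(`${s.firstName} ${s.lastName}`)))
    .catch(console.error)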

Top-100-Rated-Talks for #Devoxx Belgium 2018

After the event, via Twitter, the Top-100-Rated-Talks for #Devoxx Belgium 2018 were made available.

[https://twitter.com/devoxx/status/1064452715034669056?lang=en]

For your convenience I added the Top 25 Rated-Talks for #Devoxx Belgium 2018 in a more readable form:

Important news/sessions at Devoxx

I leave it up to you to have a look at the sessions that were held and that may be interesting to you.

However, I want to share with you the surprise appearance that James Gosling (the founder of Java) made and his announcement of “Amazon Corretto” during the keynotes of the conference.

Amazon Corretto is a no-cost, multiplatform, production-ready distribution of OpenJDK from Amazon.
[https://aws.amazon.com/blogs/opensource/amazon-corretto-no-cost-distribution-openjdk-long-term-support]

Devoxx events in 2019

If you are interested in attending a future Devoxx event, please see https://beta.devoxx.com/#/ for more information. Here are some examples:

Concluding

Attending Devoxx in Belgium, was a great week for me in gaining new insights and knowledge on a variety of subjects, such as Java, Docker, Kubernetes, Apache Kafka, Microservices and IntelliJ IDEA.

The post Devoxx Belgium 2018, From Developers For Developers appeared first on AMIS Oracle and Java Blog.


Quickly developing and hosting your friendly neighbourhood application – for free using … Oracle APEX


My challenge:

my wife wants to set up a system for some of our friends and relatives to let each other know about appliances and things they can share as well as objects they offer up [because they plan to get rid of them]. The specifications are fairly straightforward: users are known and should be authenticated, the app should be accessible through laptop and mobile device, offers can be placed along with at least one image of the object and anyone interested in an offer should have the opportunity to ask questions and indicate their interest.

Initially, I looked at Google Forms and a little later at Google Sheets (and custom sheets with Google WebScript for slightly more advanced stuff). I dug around a little, watched a pretty informative video and then decided to go a different route. Instead of no code I was quickly entering into low code at best (and medium code as my wife’s requirements start piling up, as I am sure they will). So if low code is what I am facing – with the option of getting more and more into coding – and I am looking at a very small application (in terms of everything: functionality, traffic, availability, performance) for which I will have to do some coding, isn’t Oracle APEX and the managed apex.oracle.com environment a very fitting way to go? Where coding is done in SQL, PL/SQL and JavaScript – technologies I master pretty well. Where all my other non-functional requirements probably can be taken care of rather easily. And where a managed environment for these small scale applications is offered for free.

My biggest questions at this stage were:

  • would it be easy to implement authentication on my application, to ensure only the people we know and love (and not even all of them) can enter the application
  • would it be easy to create a user interface that is intuitive enough for some of the not so computer savvy users I am expecting
  • would it be doable for me – with not much APEX experience to speak of – to quickly create at least an MVP with which my wife could get going

Let’s find out.

My steps will be:

  • get my own Workspace set up at https://apex.oracle.com
  • create the database tables to underpin my application (at least for the MVP)
  • create an application and a first page to create and query data
  • implement authentication and manage some pilot users
  • publish the application and go live for some initial poking around by the pilot users

1. Getting Started

Getting started is easy enough – at https://apex.oracle.com/en/learn/getting-started/

After requesting my own workspace

image

it took no longer than a few minutes for the email with login details to arrive.

2. Create the database tables to underpin my application (at least for the MVP)

Using the Quick SQL facility in the APEX development environment, I composed my table definitions.

image

Via Review and Run, I can quickly execute the DDL in the Oracle database schema associated with my APEX workspace.

With the Object Browser, I can inspect the tables and constraints and start editing data:

image

3. Create an application and a first page to create and query data

From the App Builder tab, I click on Create New App

image

image

image

I provide the name, select a few application options – including Authentication using Application Express Accounts – and click Create Application:

image

This will create an APEX application with a Home and Administration page. This application will be accessible to any APEX user I set up. Such a user does not need to have an Oracle account; they just use the credentials I provide them with.

To add pages to the application that are specific to the tables I have created, click Create Page:

image

Now I get to specify the style of the page – Form, and then Form on Table:

image

image

And next I get to define the name of the page, the default navigation flow from this new page and whether a breadcrumb should be added:

image

More interesting are the menu entry for the page

image

and the table on which the form is to be based: table OFFERS  – as well as the columns to be presented in the page:

image

Finally I specify the primary key column(s) for the OFFERS records:

image

And the page is created.

If I want to, I can fine-tune the page and the page items – but I do not have to. At this point I have a working page that I can test run right away, by clicking the run button:

image

The page appears, as it will to end users:

image

All fields can be entered, the record can be created and an image can be uploaded (one of the table columns is of type BLOB).

4. Publish the application to external users – implement authentication and manage some pilot users

I thought this would be a big step. To bring the application live – surely that would be a very involved operation? It turns out that it is not. I have created a user account for my wife:

image

and specified – very clearly – that she is neither an administrator nor a developer:

image

Upon creating the application, I set the authentication scheme to Application Express Accounts. After creating user accounts, I can send the application URL to these users along with their user name and password and they can get off to some initial poking around.

5. Initial User Experiences

After just a little tweaking and tuning, this is what my wife experiences on her mobile device:

image

Mobile friendly sign in – out of the box. After signing in:

image

And submitting a new offer – registering an object that is available for sharing or takeover:

image

After submitting the offer

image

it can be reviewed:

image

and inspected in detail:

image

Note: this is just the MVP – a minimal set of features. Achieved in hardly any time at all.

Status at this point:

  • cloud based data persistence (in relational database)
  • web and mobile application experience
  • authentication for centrally managed users (and no one else)
  • record creation, manipulation and review
  • upload images

This covers the core requirements from my wife. And I feel fairly confident that I can address most of her follow-up requirements as well.

Some very nice features in APEX I would like to mention:

  • search through the entire application (for example to find that one text string that you want to have modified in the application)
  • define reusable lists of values
  • predefined page patterns (such as wizard, master-detail, interactive grid)
  • built in user feedback mechanism – and option to ‘turn feedback into todo item’
  • great look and feel on mobile, just out of the box – including ability to take and add photos as uploaded images
  • free – not just the product APEX but also the lightweight, fully managed environment

Conclusion

If you want to quickly create an application for a small group of known users that you want to make available on desktop and mobile devices from anywhere in the world, in which data can be entered, browsed and manipulated, and you want to spend no time on installation, administration and other non-functionality related stuff and you want it to look nice, then you should probably consider Oracle APEX as a solution. When Google Forms and SurveyMonkey are just too limited (by far), when low code platforms do not offer a freely available, managed environment, and when you prefer not to code but – if you have to – SQL and PL/SQL are fine, then Oracle APEX can well be your friend.

I managed to get a simple application based on four database tables up and running – and accessed by my wife on her mobile phone – in maybe three hours on a lazy Sunday afternoon, with no prior knowledge or skills regarding APEX. If I had to do the same again, it would take less than one hour. From start to my wife and four of her friends live on a (mobile) web application where they create and share data.

APEX – for all your neighbourhood applications.

Resources

Request APEX workspace: at https://apex.oracle.com/en/learn/getting-started/ (this requires an Oracle account)

Current APEX documentation: https://apex.oracle.com/en/learn/documentation/

APEX Community Discussion Forum: https://community.oracle.com/community/groundbreakers/database/developer-tools/application_express 

Notes

Couple of things I had to find out:

– how to set a style on the image to have it contained within the page (set Advanced | Custom Attributes – style="height:400px")

– how to derive a default value for a page item from one of the database tables using the user id of the current user – see the documentation on Session State Management: define the default for the page item as type PL/SQL Function Body, write a PL/SQL code block that returns a value of the correct type, derive the value using the session state variable :APP_USER and perform a query against the MEMBERS table.

image

The post Quickly developing and hosting your friendly neighbourhood application – for free using … Oracle APEX appeared first on AMIS Oracle and Java Blog.

Some Notes on my small steps with Oracle APEX


In this article, I will share some of my personal findings and discoveries as I start out building a small APEX application. No experienced developer is likely to learn anything from this article – but first timers like me could perhaps benefit from my findings.

Some of the challenges I faced and managed to overcome:

  • Use CSS styling for images shown in a grid layout – scaling them down from their original size to a proper format
  • Derive the default value for an item from a database table using the currently signed in user’s identification
  • Set a CSS style on an image in a form-layout (using Advanced | Custom Attributes)
  • Show a tooltip for cells displaying one database column, with text derived from a different database column
  • Replace the edit icon (pencil) in interactive report
  • Dynamically derive page title – using data in page items
  • Dynamically determine to which Page a Button should Navigate

Derive Default Value for an Item – based on a Query leveraging the current user’s Id

Challenge: I want to set the default value of an item when the user creates a new record and I want this default value to be derived using a SQL query that filters based on the user identity of the current user. I have a MEMBERS table that contains entries corresponding to all my application’s end users. This table contains a column USERNAME with values corresponding to the usernames with which my users log in.

Define the default for the page item as type PL/SQL Function Body and write a PL/SQL code block that returns a value of the correct type. The bind variable :APP_USER can be used in the PL/SQL code to refer to the username of the current session’s user. In the PL/SQL code, perform a query against the MEMBERS table, filter on username using :APP_USER and return whatever column value is needed as default value.

image

Resources:

Documentation on Session State Management

Set CSS properties to impact the contents of Cells in a Table (in an interactive report template)

Challenge: I have a table that contains a column based on a BLOB column. I want the cells in this column to display an image based on the BLOB. However, the size of the image has to be contained. I cannot directly define a CSS property for the column, so I need a workaround.

The steps:

  • define a CSS class at page level
  • define inline CSS at page level (or include this CSS in a file included in the application) to set the image size for images in table cells in the relevant column
  • define static id at column level

Define a page level CSS class:

image

Define Inline CSS at page level

image

Here I specify that within the global selector aanbod that was specified at page level (and applies to all elements in the current page), all table cells whose headers attribute has the value afbeelding should have the CSS property height set to 80px for any images (img elements) they happen to contain.

Define the Static ID at column level:

image

The result of this setting is that each cell (TD element) in the column has an attribute headers set to the value of the Static ID (afbeelding).

image

The result is that the image has the style property height set to 80px:

image

Some of the resources I used:

https://dsavenko.me/classic-report-interactive-report-interactive-grid-cell-style-based-on-data/

https://www.w3schools.com/css/css_attribute_selectors.asp

Define CSS property for an Image item (based on a BLOB column) in a Form layout

I want to display an image in the form layout. The item is based on a BLOB column. Some of the uploaded images are decently small, and others can be outrageously large. I want to have them all displayed at the same height of 400 pixels. I can achieve this by specifying a value for Custom Attributes in the Advanced section of the property palette for my image item. The value is set to style="height:400px"

image

The result is a decently sized image:

image

Show a data-driven tooltip for cells in an Interactive Report

My interactive report’s table component contains a column called NAME. The underlying database table also contains a column called DESCRIPTION, a VARCHAR2(4000) field with additional descriptive content for each record. I would like the content of DESCRIPTION to be shown as a tooltip for cells in the interactive report. When the mouse hovers over the name value, the description of the corresponding database record should be displayed as a tooltip.

After some searching, it turns out that I can make use of the very powerful HTML Expression property. This property specifies for page items how they are to be rendered in the web page. In HTML expressions, I can use HTML tags, include calls to JavaScript functions, insert CSS and use placeholders for column values – for example #DESCRIPTION# and #NAME#, as shown below.

image

The result of this HTML expression is shown below:

image

Note: the #DESCRIPTION# placeholder can be used because the DESCRIPTION page item exists – as a hidden column.

Resources

https://community.oracle.com/thread/3537281

Docs: Managing Interactive Report Column Attributes https://docs.oracle.com/database/apex-18.1/HTMDB/managing-interactive-report-column-attributes.htm#HTMDB-GUID-D3275F83-328B-477F-AA35-8BC7DCD3DC1F 

Replace Edit Icon in Interactive Report (not a pencil but something else)

Instead of the standard pencil icon, I would like to use a different icon to drill down to the details page. From a blog article, I learned that I should edit the attributes of the Interactive Report template instance in the current page. The Link Icon attribute defines the icon that is shown. It is easily changed to a different icon.

The icons available out of the box are listed here: https://tedstruik-oracle.nl/ords/f?p=25384:1003:::NO:::

image

The result looks like this:

image

Alternatively, Font Awesome’s collection of icons is available: https://fontawesome.com . An icon can be configured as the link icon as easily as <span class="fa fa-bell"></span>.

See: https://www.apex-at-work.com/2016/07/interactive-report-with-font-awesome.html 

Dynamically Set Page Title

The title of the page should be derived dynamically, depending on the current page item values. I can use substitution variables in the title – referring to page or application level items. For example:

Details en Reactie voor Te Geef Aanbod  &P14_HANDOVER_OR_SHARE.

Note that the Page Item Reference starts with & and ends with . (a period).

I create a new page item, based on a PL/SQL Function Body. In this body, I derive the proper value for the page title.

image

I then define the page title using a substitution variable that refers to this page item.

image

Live in the application it resolves beautifully:

image

https://stackoverflow.com/questions/24659852/change-page-title-based-on-item-in-oracle-apex-4-0

Dynamically determine to which Page a Button should Navigate

I am using a single edit details page that is drilled down to from two pages. When the Cancel or Save button on this page is clicked, it should navigate back to the original page – which is one of two options.

How can I instruct APEX to evaluate the target page at runtime, based on a session level attribute or a page item?

Add a Dynamic Action to the Cancel button:

image

Add an Action under the Dynamic Action to specify what should happen: Submit Page.

image

Finally create a processing branch under ‘After Submit’ to be executed whenever the submit page event has occurred:

image

The branch will perform navigation to the desired target page. It executes a PL/SQL function block to determine the page it needs to navigate to – using APEX_PAGE.GET_URL and the page item HANDOVER_OR_SHARE, which determines to which page navigation should take place.

Resource:

How to create dynamic links in Oracle Application Express?

https://www.bi4all.pt/en/news/en-blog/how-to-create-dynamic-links-in-oracle-application-express/

The post Some Notes on my small steps with Oracle APEX appeared first on AMIS Oracle and Java Blog.

Some neat APEX tricks while building a Session Like application for our Conference


AMIS is part of the Conclusion ecosystem of over 20 companies, each with their own specialty and identity. Several times per year, we organize Conclusion on Stage – a conference that spans across the ecosystem. Presenters from most companies under the Conclusion umbrella submit session proposals. Close to 30 sessions are staged in five rooms and close to 200 participants attend these sessions.

My challenge in this article: I want to learn how many of my colleagues have a special interest in each of the proposed sessions. In order to prepare the right schedule for the event, I need to know which sessions are most popular and should have the largest rooms and not be scheduled at the same time. My solution is to create a simple web application that presents details on all submitted sessions and allows each colleague to indicate their likes for these sessions. I select APEX as my platform and apex.oracle.com as the hosted environment where it will run.

In this article, I will describe the steps I went through in order to create and publish this little application – and how I dealt with some of the challenges.

The following figure – complex as it may appear – provides a summary of the final result and how the user negotiates the flow through it.

Steps:

  1. Create an APEX application from an Excel sheet that contains all sessions – title, speakers, description, tags, duration and more
    This will create a SESSIONS database table. I manually create – through Quick SQL – tables COLLEAGUES and LIKES. I also create View V_SESSION_LIKES to join LIKES with SESSIONS. More on this view a little later.
  2. Create a Form page on table COLLEAGUES. Set the default value for the P1_ID item to SYS_GUID()
  3. When the Create button is clicked,
  4. the current value of P1_ID is saved to the Session State
  5. the new Colleague record is inserted into the Database Table COLLEAGUES
  6. navigation takes place to a second page (based on a link that also copies the value of P1_ID into P4_COL_ID)
    Page 4 is of type List View, its contents derived from the View V_SESSION_LIKES and used in a custom markup; note that this page also contains a hidden item called P4_LIKE_ID
  7. the item P4_COL_ID is set from P1_ID
  8. the value of item P4_COL_ID is stored in APEX session state
  9. in an after header, pre region process, a PL/SQL block is executed that generates LIKES records for the current colleague (from P4_COL_ID) and all Sessions
  10. when the region of type List View is rendered, it queries from view V_SESSION_LIKES; the where clause of this query filters on the LIKES records that are associated with the colleague record with the id value equal to the value stored in P4_COL_ID and retrieved using the APEX_UTIL.GET_SESSION_STATE function
    the view returns the name of an image to render – depending on whether the state of a LIKES record is Y or not (Y being a like)
  11. An onClick event handler is associated with the like icon; it invokes a JavaScript function. This function toggles the image – from liked to not particularly liked or vice versa. It also sets the value of the id for the clicked LIKES record in the P4_LIKE_ID item
  12. P4_LIKE_ID is a hidden item
  13. A Dynamic Action is associated with item P4_LIKE_ID; this action is triggered when the value of P4_LIKE_ID is changed. In our case, that means that when the like image is clicked, the invoked JavaScript function uses the APEX JavaScript library to set the item’s value apex.item().setValue() . This in turn triggers an AJAX call from the browser to the server.
  14. The dynamic action executes a PL/SQL function that updates a LIKES record in the database table. Depending on the state of the like icon, the record is Liked or Unliked.

image

This may seem complicated. However, it is not all that hard. And perhaps I could have done all this in a simpler way – one that I have not yet uncovered.

Let’s take a closer look at some of the steps (note: some code is found here on GitHub)

1. Create an APEX application from an Excel sheet

This really is not a challenge at all. Save the Excel sheet as a CSV file. Then just use the wizard.

image

This will create a SESSIONS database table.

I manually create – through Quick SQL – tables COLLEAGUES and LIKES. I also create View V_SESSION_LIKES to join LIKES with SESSIONS.

image

Look at the somewhat contrived way of sorting the records in a random order – to have every colleague see the sessions in a different order. Also note the where clause reference to the APEX_UTIL.GET_SESSION_STATE function used to filter the LIKES records by the current colleague’s id. Finally, note the like_icon expression – a CASE expression to return the image name depending on the like status.

2. Create a Form page on table COLLEAGUES.

Again, this is not a challenge. Just a simple step.

Set the default value for the P1_ID item to SYS_GUID(). I have also added a heading and a logo.

image

The form looks like this:

image

3. When the Create button is clicked and 4. the current value of P1_ID is saved to the Session State and 5. the new Colleague record is inserted into the Database Table COLLEAGUES

image

6. Navigation takes place to a second page (based on a link that also copies the value of P1_ID into P4_COL_ID)

The Processes tab contains the entry that causes the navigation to Page 4 when the Create button is clicked. The value of item P1_ID is copied to P4_COLLEGA_ID (in Page 4).

image

Page 4 is of type List View, its contents derived from the View V_SESSION_LIKES and used in a custom markup; note that this page also contains a hidden item called P4_LIKE_ID

The custom markup for the List View region is not trivial at all:

image

The result looks like this:

image

7. the item P4_COL_ID is set from P1_ID and 8. stored in APEX session state and 9. used to generate LIKES records

in an after header, pre region process, a PL/SQL block is executed that generates LIKES records for the current colleague (from P4_COL_ID) and all Sessions

image

10. when the region of type List View is rendered, it queries from view V_SESSION_LIKES; the where clause of this query filters on the LIKES records that are associated with the colleague record with the id value equal to the value stored in P4_COL_ID and retrieved using the APEX_UTIL.GET_SESSION_STATE function

the view returns the name of an image to render – depending on whether the state of a LIKES record is Y or not (Y being a like);

The HTML markup for rendering the Like image is this: <span aria-hidden="true" class="&LIKE_ICON." onclick="thumbimage('&THE_LIKE.', '&ID.', this)"></span>. Note the substitution variables for LIKE_ICON, THE_LIKE and ID – all replaced by APEX upon rendering with the values queried from the V_SESSION_LIKES view.

11. An onClick event handler is associated with the like icon – calling function thumbimage

The onClick handler invokes a JavaScript function. This function toggles the image – from liked to not particularly liked or vice versa. It also sets the value of the id for the clicked LIKES record in the P4_LIKE_ID item

image

Note how the APEX JavaScript library is used to programmatically set the value of item P4_LIKE_ID – using the id value from the session record that was clicked. Note how an n(ot particularly liked) or l(iked) is prepended to the id value.
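The screenshot with the function itself did not survive; a sketch of what thumbimage could look like – the Font Awesome thumb icon classes are my assumption, the n/l prefix follows the description above:

// sketch of the onClick handler; fa-thumbs-up / fa-thumbs-o-up are assumed icon classes
function thumbimage(theLike, id, element) {
    if (theLike == 'Y') {
        // was liked: toggle the icon to "not particularly liked"
        element.classList.replace('fa-thumbs-up', 'fa-thumbs-o-up')
        apex.item('P4_LIKE_ID').setValue('n' + id)  // the change event triggers the Dynamic Action
    } else {
        element.classList.replace('fa-thumbs-o-up', 'fa-thumbs-up')
        apex.item('P4_LIKE_ID').setValue('l' + id)
    }
}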

12. P4_LIKE_ID is a hidden item and 13. A Dynamic Action is associated with it

The Dynamic Action on item P4_LIKE_ID is triggered when the value of P4_LIKE_ID is changed. In our case, that means that when the like image is clicked, the invoked JavaScript function uses the APEX JavaScript library to set the item’s value with apex.item().setValue(). This in turn triggers an AJAX call from the browser to the server.

image

14. The dynamic action executes a PL/SQL function that updates a LIKES record in the database table

Depending on the state of the like icon – reflected in the first character in the string passed in P4_LIKE_ID – the record is Liked or Unliked.

image

Through the Items to Submit property, I have specified that the new value assigned in the browser to P4_LIKE_ID should be passed to this PL/SQL code – though no other items’ values are required.

Summary

I set both pages to be accessible by unauthenticated users. I can now share the URL with all my colleagues and have them indicate the sessions they like. I have created my own simple report to check on the best-liked sessions.

image

It is quite satisfying to find how well the application looks and functions on mobile devices. Without any effort on my part.

Resources

Icons – including Font Awesome – available with APEX: https://apex.oracle.com/pls/apex/f?p=42:icons

In the client, set the value of a hidden page item when the event of interest occurs:

https://apex.oracle.com/pls/apex/germancommunities/apexcommunity/tipp/6341/index-en.html

Docs on JavaScript in APEX: https://docs.oracle.com/en/database/oracle/application-express/18.2/aexjs/toc.html 

JQuery in APEX: https://oracle-patches.com/en/web/3405-jquery-fundamentals-for-apex-pl-sql-programmers 

List View with custom layout and embedded JavaScript – https://krutten.blogspot.com/2018/06/46-list-view-in-apex-181.html

Application Context and APEX – https://jeffkemponoracle.com/2015/11/apex-5-application-context/

The post Some neat APEX tricks while building a Session Like application for our Conference appeared first on AMIS Oracle and Java Blog.

Building a Conference Session Agenda with Oracle APEX – notes on Pivot, Modal Popup and jQuery


AMIS is part of the Conclusion ecosystem of over 20 companies, each with their own specialty and identity. Several times per year, we organize Conclusion on Stage – a conference that spans across the ecosystem. Presenters from most companies under the Conclusion umbrella submit session proposals. Close to 30 sessions are staged in five rooms and close to 200 participants attend these sessions.

I want to provide a Conference Agenda App that offers the audience easy access to the current agenda, with information about which sessions take place when and where as well as details about the sessions. It would have to look like this:

image

and when the user clicks or taps on one of the cells, a popup should appear with the details about the session:

image

In a previous article, I wrote about my first steps with the APEX app for the Conclusion on Stage conference.

In this article I will build on that story and show the steps I had to take to implement this app. The app is running on apex.oracle.com – hosted and publicly accessible.

The steps:

  • create tables sessions, rooms and slots; foreign keys from sessions to both rooms and slots
  • create an APEX interactive grid for editing the session details (the initial data was loaded from an Excel document); the room and slot assignments are recorded through this grid – in the database
  • create a database view that uses PIVOT to turn the five rows for one time slot (we have five rooms for the conference) into a single row with five columns (for each of the rooms and sessions taking place at that time)
  • create an APEX interactive report agenda on top of that view – showing the sessions in a table-grid
    • set the column headings to the names of the rooms
  • create a modal page with a form on table region to show the details of a single session
  • create a link from the agenda to the modal page, passing the session id for the cell to the popup with the session details
  • apply background color to all sessions starting at the same time
    • set a CSS class per slot
    • define the CSS styles for these classes, resulting in a color per time slot
    • set title attribute for the cells – to show the session abstract when hovering over a cell
    • add JavaScript to apply the CSS class defined on the cell contents to the cell’s parent TD element; for sessions that last for more than a single slot, also apply the class to the corresponding TD element in the next row

Sources for the application described in this article are on GitHub: https://github.com/lucasjellema/conference-management 

1. Create tables sessions, rooms and slots

Three database tables are used in this application: one to hold all session details such as speakers, title, abstract, tags and duration (loaded from an Excel file) and two for rooms and slots respectively. The slots table holds the start_time for each slot, the rooms table has the names for the rooms and a seq column that determines the sort order (based on the physical location of the rooms). The DDL file is nothing special – you can find it in the GitHub repo: https://github.com/lucasjellema/conference-management/blob/master/db.ddl

2. Create an APEX interactive grid for editing the session details

(the initial data was loaded from an Excel document); the room and slot assignments are recorded through this grid – in the database

Even though this is a very powerful page for inspecting and manipulating data, in terms of APEX development effort it is almost a piece of cake.

image

The interactive grid is created on the sessions table. I have decided which columns to show – in which order – and which ones to hide. I have specified the columns for Room and Session slot to be of type selectlist. I have specified how the values for these select lists are retrieved, using database queries:

image

and the Shared LOV component:

image

Using the Session Manager, I can define all session details including their slot and room allocations.

3. Create a view to produce rows per timeslot with columns per room

The Agenda overview I am looking for shows the data in a grid. Each cell in the grid corresponds to a session, each column to a room and each row to a slot. The sessions table has a row per session. My challenge: turn five rows into a single row with five columns.

This is where the SQL PIVOT operator comes into play. I use it to pivot my five rows (grouped by time slot) into one row with columns per room. The SQL statement below joins sessions with rooms and slots, then pivots the session id values over the rooms. The other columns (start_slot and starttime) are used for an implicit group by. The session records are pivoted in groups with the same start_slot and starttime – maximum five rows. Their session id values are assigned to the column for their corresponding room assignment.

select *
from  (select s.id sessie_id, start_slot, r.seq room , to_char(sl.start_time,'HH24:MI') starttime from sessions s join rooms r on (s.room=r.id) join slots sl on (s.start_slot = sl.id ))
pivot  (  max( sessie_id) sessie_id
           for room in ( 1 as r1,2 as r2,3 as r3,4 as r4,5 as r5 )
        )

image

With this query at its core, I have created a Database View sessie_rooster (Session Schedule) that produces rows per timeslot with columns for the id, titel & speakers, slot_count and abstract for each session in that timeslot.

image

This view clearly is not meant for data manipulation – not without an Instead Of trigger anyhow. It is a great way to underpin a visualization of the agenda data.

4. Create an APEX interactive report agenda on top of that view – showing the sessions in a table-grid

Creating a new page with an interactive report on a table or view is very straightforward in APEX.

image

I have set most columns to hidden – only STARTTIME and the five R#_SESSIE columns are visible.

The column headings for those five columns are defined using the names of the five rooms where sessions take place, based on data in table ROOMS but hard-coded into the application. It feels a little unfortunate but unavoidable.

image

This is what the agenda looks like to the end user at this stage:

image

I want the users to be able to click or tap on a cell and then bring up a popup window that provides details on the session.

5. Create a modal page with a form on table region to show the details of a single session

I create a new page – with a form on a table as a modal dialog. The page is public (no authentication required).

image

The wizard is straightforward enough:

image

The page does a fetch of a single row from table sessies, based on the value of item P5_ID. We have to make sure that this item is set with the right session id when this modal page is loaded from a cell in the agenda.

image

6. Create a link from the agenda to the modal page, passing the session id for the cell to the popup with the session details

It is remarkably simple to turn the cells in the report into links to the modal window popup. Change the type to Link for each of the five items R#_SESSIE:

image

Define the Link details for each of these five items:

image

The page to link to is the modal page, page 5 in my application. The page item P5_ID in that page should be populated with the id of the session whose cell is clicked. This value is referred to with #R1_ID# for the R1_SESSIE item, and #R<number>_ID# for all items.

With this definition in place – and this is really all it takes – I can run the agenda again, and open the popup:

image 

7. Apply background color to all sessions starting at the same time

I am not a UX designer so maybe it is not a very good idea. But I thought it would be nice to have all sessions that start at the same time share a visual attribute – such as a background color. To that end, I define the link attributes for each of the five link items – using CSS classes per time slot:

image

The link will have a class set of slot_1, slot_2, … slot_9 – depending on the value of the start slot item. A second class that is assigned is slotcount_1, slotcount_2, … – depending on the duration in slots for the session. Finally, the title attribute is set, using the expression #R1_ABSTRACT#, #R2_ABSTRACT#, … , referring to the current abstract value. This will result in the abstract of a session being displayed as hover text.

The CSS classes have to be associated with specific style characteristics. This is done in the CSS | Inline Styles property of the Page:

image

With these definitions in place, I can run the Agenda once more:

image

Colors for all slots, and a hover text with the session abstract:

image

It is a good first step. However, I would like the entire cell to have the background color, not just the text. So instead of the <a> element rendered for the link, I have to get to the parent TD element and apply the same CSS class to that element.

Using a little JavaScript to apply the CSS class defined on the cell contents to the cell’s parent TD element, defined at Page level, this is quite simple:

image
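For reference, a minimal sketch of that page level JavaScript (assuming the nine slot classes from the inline CSS above):

// for each slot class, copy the class from the link element to its parent TD cell
for (var slot = 1; slot <= 9; slot++) {
    $('.slot_' + slot).parent().addClass('slot_' + slot);
}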

Using jQuery to look for all elements with a certain slot_<slot sequence>  class and then set that same class on their parent element, I quickly have the result I wanted:

image

However, I want a little bit more. For sessions that last for more than a single slot, I also want to apply the class to the corresponding TD element in the next row. Sessions that are longer than one slot have a slotcount value of 2 or 3 and a corresponding class assigned: slotcount_2 or slotcount_3.

A little additional JavaScript allows me to apply the slot_<slot sequence> class to the TD element right below the one holding the session:

image
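Again a sketch of what this could look like – using the cell’s column position to find the matching TD in the next row; three-slot sessions would extend the same idea one row further:

// for two-slot sessions, also color the cell directly below
for (var slot = 1; slot <= 9; slot++) {
    $('.slotcount_2.slot_' + slot).each(function () {
        var td = $(this).closest('td');
        td.closest('tr').next('tr').children('td').eq(td.index()).addClass('slot_' + slot);
    });
}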

This is the end result – that I will consider my MVP:

image

Resources

Some of the resources I have used for understanding APEX mechanisms and other trickery:

My own two articles with previous steps on the Conference Manager:

All Things SQL – Chris Saxon – How to Convert Rows to Columns and Back Again with SQL (Aka PIVOT and UNPIVOT) – https://blogs.oracle.com/sql/how-to-convert-rows-to-columns-and-back-again-with-sql-aka-pivot-and-unpivot 

APEX Docs – Application Express App Builder User’s Guide 9.6.2 Managing Interactive Report Attributes – https://docs.oracle.com/database/apex-18.1/HTMDB/managing-interactive-report-attributes.htm#HTMDB-GUID-2CC35334-D1D8-459F-849A-6849CEB5820A 

Jeff Kemp on Oracle – articles on JavaScript: https://jeffkemponoracle.com/tag/javascript/

APEX Docs – JavaScript Reference (18.2) – https://docs.oracle.com/en/database/oracle/application-express/18.2/aexjs/toc.html

jQuery Fundamentals for APEX PL/SQL programmers – https://oracle-patches.com/en/web/3405-jquery-fundamentals-for-apex-pl-sql-programmers

Stack Overflow Oracle APEX format a cell based on users input  – https://stackoverflow.com/questions/17784633/oracle-apex-format-a-cell-based-on-users-input

Modal Dialog Dynamic Title – https://community.oracle.com/thread/4156461

W3 Schools CSS color picker – https://www.w3schools.com/colors/colors_picker.asp

The post Building a Conference Session Agenda with Oracle APEX – notes on Pivot, Modal Popup and jQuery appeared first on AMIS Oracle and Java Blog.

JavaScript Generators – How functions return a stream of results, yielding one result at a time


It was through inspecting some Python code that relied quite heavily on generators that I suddenly realized the beauty of the ES6 concept of generators and the yield keyword. A generator function does not return its result all at once but instead returns an iterator that can be read from, one value at a time. A for loop can be constructed that iterates over the results from a function – but the function does not have to create its entire result set before the for loop can start doing any work. The generator yields results when asked for them, lazily doing the work it was asked to do. And when no more results are required, no more results are produced and yielded.

Note: for those of you who are into PL/SQL and know pipelined table functions, this must sound somewhat familiar.

Let’s take a look at some interesting examples.

This code contains a for loop over the result from a function call. The for…of syntax (see: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for…of ) allows iterating over Array, Map, Set, Array-like objects such as arguments, NodeList and user-defined iterables. It also allows iteration over the result of a generator function – a pipelined or step-by-step returned result.

//print the alphabet
for (let ch of alphabet()) {
    console.log(ch)
}

In this case, the function alphabet() is defined like this:

function* alphabet() {
    var n = 0
    while (n < 26) {
      yield String.fromCharCode(97 + n++);
    }
  }

Nothing very special. The most eye-catching elements are the * postfix in the function keyword and the yield call in the while loop. Simply put: the asterisk turns the function into a generator function (a function whose result can be iterated over by the caller) and the yield is the action that returns a result. One element out of the potentially many that this function could return (if the caller keeps asking for more results, that is). Note that the generator function will run until the yield – not beyond that call. It will only continue to run when the caller asks for the next value.

Running this code produces the alphabet. Duh…

image
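By the way, the laziness is easy to observe by driving the iterator by hand instead of through for…of:

// using the alphabet generator defined above; every next() call runs the
// generator up to (and including) its next yield and then suspends it again
const letters = alphabet()
console.log(letters.next())  // { value: 'a', done: false }
console.log(letters.next())  // { value: 'b', done: false }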

The generator does not make any promises regarding the speed with which results are returned. It may be the case that producing the next result takes a while.

The next code fragment adds logging with timestamps, and adds a deliberate delay of 500 ms at the beginning of an iteration of the for loop. Right after getting the next value from the iterator (aka the generator function), a sleep of half a second is performed. This shows up in the logging as a gap between the yield action and the corresponding int-to-character translation and printing to the output. It should be clear that there is also a gap of about 500 ms between the iterations inside the generator function – caused by the lazy retrieval of values from the generator.

const sleep = (milliseconds) => {
    return new Promise(resolve => setTimeout(resolve, milliseconds))
}

const lg = (msg) => {
    const d = new Date()
    console.log(`${d.getSeconds()}:${d.getMilliseconds()}  - ${msg}`)
} 

function* alphabet() {
    var n = 0
    while (n < 26) {
        lg(`.. yield ${n}`)
        yield String.fromCharCode(97 + n++);
    }
}

const doSomething = async () => {
    //print the alphabet
    for (let ch of alphabet()) {
        await sleep(500)
        lg(ch)
    }// for of alphabet()
}

doSomething()

Note how the combination of the await keyword in the async function doSomething with the sleep function that returns a Promise with a delayed result (because of the setTimeout) allows me to have a sleep construct in a single threaded runtime. See for example this article for some background: https://flaviocopes.com/javascript-sleep/

image

 

If the action performed by the generator function to produce a next value takes considerable time, it may make sense to start producing the next value (asynchronously) just prior to yielding the current value. That means when the consumer asks for the next value, some (and perhaps all) work already will have been done to produce it.

In the next code snippet, the delay is inside the generator function – a very common situation. The generator function may have to read from a database or invoke an external API and constructing the result may take some time. In this code, the delay is again simply the sleep function. However, there are some very relevant changes compared to the previous code snippet – changes that have only just been enabled in ES 2018.

The generator function is now async – and it has to be because it invokes the asynchronous sleep function. As a consequence, the await keyword has been added to the for..of loop that retrieves values from the now async generator function. These two changes were not supported until recently. See this article for more background on ES2018 Asynchronous Iteration: https://medium.com/information-and-technology/ecmascript-2018-asynchronous-iteration-ec0b6a3a294a

View the code on Gist.
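The embedded Gist does not render here; based on the description above and the previous snippet, the code will have looked roughly like this:

const sleep = (milliseconds) => {
    return new Promise(resolve => setTimeout(resolve, milliseconds))
}

const lg = (msg) => {
    const d = new Date()
    console.log(`${d.getSeconds()}:${d.getMilliseconds()}  - ${msg}`)
}

// the generator itself is now async: it awaits the sleep before every yield
async function* alphabet() {
    var n = 0
    while (n < 26) {
        await sleep(500)
        lg(`.. yield ${n}`)
        yield String.fromCharCode(97 + n++);
    }
}

const doSomething = async () => {
    // for await - iterating over an asynchronous generator (ES 2018)
    for await (let ch of alphabet()) {
        lg(ch)
    }
}

doSomething()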

image

What we can see is that the logging for yield and the corresponding alphabet character are virtually at the same time. The time gap is now between the logging of the letter and the next yield action in the generator – as it will frequently be generators that call out to external sources or otherwise have a hard time preparing their values. Fortunately, we can now have the generators yield values one by one – and start working on the yielded values long before all of them have been prepared.

Pipelining

An example of pipelining generator functions together is shown here: a text fragment is reduced to individual sentences that are then reduced to words. The occurrences of words are counted. And yes, map and reduce operations on arrays can do a similar job. Here I only use this example to demonstrate the syntax of the generator function and the yield action.

Suppose the action performed by function sentences() were expensive, performance-wise; then at least we can count on some (not all) earlier results when using the generator approach, compared to first preparing the full result set from sentences() before doing any work on those results.

const lg = (msg) => {
    const d = new Date()
    console.log(`${d.getSeconds()}:${d.getMilliseconds()}  - ${msg}`)
} 

function* sentences(text) {
    const s = text.match( /[^\.!\?]+[\.!\?]+|[^\.!\?]+$/g );
    for (let sentence of s)
       yield sentence;    
}//sentences

function* words(sentence) {
    const w = sentence.split(" ");
    for (let word of w)
       yield word;
}//words

var txt="One potato is growing in the field. Two potatoes are peeled by my mother in the field. Three potatoes are swimming in gravy on my plate.";
var dictionary ={}

for (sentence of sentences(txt)) 
  for (word of words(sentence)) {
    dictionary[word]= dictionary[word]?dictionary[word]+1:1
  }

lg(JSON.stringify(dictionary)) 

Here is the not very useful output:

Note: this code would work – and words would be found and added to the dictionary even if the sentences generator would never actually complete – but would continue to produce a stream of sentences, like a constant Tweet consumer.

 

image

 

This article describes how running aggregates (such as moving averages) can be produced on endless streams of data, using asynchronous iterators: https://dev.to/nestedsoftware/asynchronous-generators-and-pipelines-in-javascript–1h62 This is very powerful. And now relatively simple to implement in ES2018 (JavaScript, Node JS applications).

 

Resources

ES6 Generators and Async/Await https://medium.com/altcampus/es6-generators-and-async-await-e58e35e6834a

Asynchronous Generators and Pipelines in JavaScript https://dev.to/nestedsoftware/asynchronous-generators-and-pipelines-in-javascript–1h62

The post JavaScript Generators – How functions return a stream of results, yielding one result at a time appeared first on AMIS Oracle and Java Blog.

JavaScript Pipelining using Asynchronous Generators to implement Running Aggregates


image

As of ES 2018 (recent browsers or Node 10), JavaScript supports asynchronous generators. Generators are functions that return a set of values, one value at a time. These values can be processed inside the code that invokes the generator immediately, as soon as they become available. There is no need to wait for the entire result set to be composed first. In cases where the result set is huge or even never ending, this is quite convenient. The result from one generator function can be fed into another function which can be a generator function too. And so on. This makes pipelining possible: a series of functions, all working together (and more or less in parallel) on taking each result through a series of processing steps.

With the fairly recent addition of asynchronous generators, the generator function producing the result set may be asynchronous – relying for example on Promises to gather its values.

In this article, I want to show something of the beauty of all of this. I will share a simple ES 2018/Node application that uses Promises to produce values asynchronously – triggered by timeouts. Three Promises represent three temperature sensors; in this case the values are simply generated. However, these Promises could just as well read values from an external source or consume incoming events. Each Promise when resolved produces a sensor readout. The promise is wrapped in a promise that writes the sensor value to a temporary store (latestValue) and removes itself from the sensorPool – the set of promises that function sensorValues() is waiting on using Promise.race([…sensorPool]).

image
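The code itself is only shown as a screenshot; a minimal sketch of this construction – with simulated readings, and the names latestValue, sensorPool and sensorValues taken from the description above – could look like this:

let latestValue;
const sensorPool = new Set();

// each sensor is a promise that resolves with a (simulated) readout after a
// random delay; on resolution it stores the reading, removes itself from the
// pool and schedules the sensor's next readout
const createSensor = (sensorId) => {
    const sensorPromise = new Promise(resolve =>
        setTimeout(() => resolve({ sensorId, value: 15 + Math.random() * 10 }),
            Math.random() * 3000))
        .then(reading => {
            latestValue = reading;
            sensorPool.delete(sensorPromise);
            sensorPool.add(createSensor(sensorId));
        });
    return sensorPromise;
};

[1, 2, 3].forEach(sensorId => sensorPool.add(createSensor(sensorId)));

async function* sensorValues() {
    while (true) {
        await Promise.race([...sensorPool]); // wait for the first sensor to report
        yield latestValue;
    }
}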

In the asynchronous generator function sensorValues() we wait in an endless loop for one of the sensor promises to resolve (Promise.race resolves to the first of the set of promises to resolve). When that happens, the latestValue – written when the sensor promise resolved – is yielded.

Another asynchronous generator function – runningSensorAverages – is triggered by the yield from sensorValues (in the loop for await (sensorReading of sensorReadings)). The yielded value is added to the values collection for the current sensor in the sensors map. The value of ticks is increased; ticks counts the number of values received since the last calculation of the running aggregate. If the value of ticks equals the value of period (the parameter that specifies after how many values a new aggregate should be calculated), then a new aggregate is calculated, using the last windowSize values in the values collection for the current sensor. The value calculated is yielded (and ticks is reset).
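A sketch of what runningSensorAverages could look like, following that description (the shape of the yielded object is an assumption):

async function* runningSensorAverages(sensorReadings, period, windowSize) {
    const sensors = new Map();
    for await (const sensorReading of sensorReadings) {
        if (!sensors.has(sensorReading.sensorId)) {
            sensors.set(sensorReading.sensorId, { values: [], ticks: 0 });
        }
        const sensor = sensors.get(sensorReading.sensorId);
        sensor.values.push(sensorReading.value);
        if (++sensor.ticks === period) {
            // every <period> readings: average over the last <windowSize> values
            sensor.ticks = 0;
            const window = sensor.values.slice(-windowSize);
            const average = window.reduce((sum, value) => sum + value, 0) / window.length;
            yield { sensorId: sensorReading.sensorId, average };
        }
    }
}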

The yielded running aggregate is received in function doIt(). This function writes the yielded value to the console – from another for await loop.

The result looks like this:

image

The pipelining nature of this application is best captured by this line:

for await (runningAverage of runningSensorAverages(filterOutliersFromSensorReadings( sensorValues()), 15, 10)) {..}

The streaming result from sensorValues() is piped – one reading at a time – to the filter function and the output from that function to runningSensorAverages, whose output appears as subsequent values in the for await loop.

 

Adding Time Windowed Aggregates

While we are at it, let’s add Time Windowed aggregates: averages produced every X seconds.

The implementation is done using a cache – a temporary store for the sensor readings that is written by runningSensorAverages(). Function timeWindowedAggregates() is triggered by a timeout after a period specified by parameter timeWindow. When the function ‘wakes up’, it reads the current contents from the cache, calculates and yields the averages.

image
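A sketch of timeWindowedAggregates along these lines – the shared cache object (readings collected per sensor by the rest of the pipeline) is an assumption about the hand-over:

const cache = {};  // e.g. { "1": [16.2, 17.1], "2": [18.4] } - filled elsewhere

async function* timeWindowedAggregates(timeWindow) {
    const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));
    while (true) {
        await sleep(timeWindow);  // wake up once per time window
        const aggregates = {};
        for (const sensorId of Object.keys(cache)) {
            const values = cache[sensorId];
            if (values.length > 0) {
                aggregates[sensorId] = values.reduce((sum, v) => sum + v, 0) / values.length;
                cache[sensorId] = [];  // start a fresh window for this sensor
            }
        }
        yield aggregates;
    }
}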

Function doIt2() contains a loop over the generator timeWindowedAggregates(): for await (timedWindowAggregate of timeWindowedAggregates(6000)), which prints the averages to the console.

The combined output looks like this:

image

Note that all timed window averages are produced at the same time (over different numbers of readings between the sensors) and the running aggregates are produced at different times (over the same numbers of readings).

The extended code base:

 

Resources

Iterate partial results of Promise.all – https://agentcooper.io/iterate-promise-all/

JavaScript Arrays — Finding The Minimum, Maximum, Sum, & Average Values – https://codeburst.io/javascript-arrays-finding-the-minimum-maximum-sum-average-values-f02f1b0ce332

Moving Average (Wikipedia) – https://en.wikipedia.org/wiki/Moving_average

How to make your JavaScript functions sleep – https://flaviocopes.com/javascript-sleep/

Javascript – Generator-Yield/Next & Async-Await – https://codeburst.io/javascript-generator-yield-next-async-await-e428b0cb52e4

Asynchronous Generators and Pipelines in JavaScript – https://dev.to/nestedsoftware/asynchronous-generators-and-pipelines-in-javascript–1h62

Let’s experiment with functional generators and the pipeline operator in JavaScript – https://medium.freecodecamp.org/lets-experiment-with-functional-generators-and-the-pipeline-operator-in-javascript-520364f97448

The post JavaScript Pipelining using Asynchronous Generators to implement Running Aggregates appeared first on AMIS Oracle and Java Blog.

Performance! 3 reasons to stick to Java 8 for the moment


It is a smart thing to move to newer versions of Java! Security updates and new features are just two of the reasons; there are many more. Performance might be a reason to stick to Java 8 though. In this blog post I’ll show some results of performance tests I have conducted showing Java 11.0.3 has slower startup times and slightly slower throughput compared to Java 8u202 when using the same Java code. Native images (a GraalVM feature) have greatly reduced startup time and memory usage at the cost of throughput. You can only compile Java 8 byte-code to a native image though (at the moment).

Worse performance for Java 11

I’ve done extensive performance measurements of different JVMs and different microservice frameworks running on them. You can browse the scripts used here and view a presentation which describes the test set-up here. The test I did: I created minimal implementations of several frameworks, compiled them to Java 8 and Java 11 byte-code respectively, ran them in Docker containers on a specific JVM and put load on them. I sent a message, waited for a response and then sent the next message (all synchronous). I’ve done the same test, which ran for 15 minutes per framework/JVM combination (millions of requests), several times and the results are reproducible. I paid specific attention to making sure the load test and the JVM used separate resources. Also, I first stored the measurements in memory and only wrote them to disk at the end of the test. I did this to be able to measure sub-millisecond differences between the JVMs and frameworks and not my disk performance.

Slightly worse throughput (response times)

For every framework (Akka, Vert.x, Spring Boot, WebFlux, Micronaut, Quarkus, Spring Fu) I noticed that the performance on Java 11 was slightly worse than on Java 8 (tested on Azul Zing, Oracle JDK, OpenJDK and OpenJ9), though the difference was less than a tenth of a millisecond. For OpenJ9 it is beneficial to go to Java 11 though when running Akka or Spring Boot.

This could be related to the garbage collection algorithm which is used by default: for Java 8 this is Parallel GC, while for Java 11 it is G1 GC. G1 GC with 2 GB of heap performed slightly worse during my test than Parallel GC, as you can see in the below graph. When reducing the heap though, G1 GC starts to outperform Parallel GC.

I have not tried Java 11 with Parallel GC (-XX:+UseParallelGC) to confirm the drop in performance is caused by the GC algorithm. As you can see though, the performance difference from Java 8 to Java 11 is consistent across the different JVMs (with the exception of Akka and Spring Boot on OpenJ9), which use different GC algorithms. For example, Zing 11 performs slightly worse than Zing 8, and Zing did not change its default GC algorithm. Thus it is likely that, even when using the same algorithm for OpenJDK and Oracle JDK 8 and 11, JDK 11 will still show slightly worse performance.

Worse startup time

GraalVM native compilation only supports Java 8

GraalVM 19 is currently (1st of June 2019) only available in a Java 8 variant. The native compilation option is still in an early adopters phase at the moment but, even though throughput is worse (see the above graphs, Substrate VM), start-up time is much better! Throughput might be worse because ahead-of-time compilation can prevent some runtime optimizations (I have not confirmed this). See for example the below graph of Quarkus startup time on different JVMs. It is also said that memory usage is better when using native images, but I have not confirmed this with my own measurements yet.

If you want to use native images, for the moment you are stuck on Java 8. A good use-case for native images can be serverless applications where startup time matters and memory footprint plays a role in determining how much the call costs you.

Finally

Java 11 specific features were not used

These tests were performed using the same code but compiled to Java 8 and Java 11 byte-code respectively to exclude effects of compilation (although OpenJDK was used to compile to Java 8 and 11 byte-code in all cases). Usually you would write different code (using new features) when writing it to run on Java 11. For example, the Java Platform Module System (JPMS) could have been implemented or maybe more efficient APIs could have been used. Thus potentially performance on Java 11 could be better than on Java 8.

Disclaimer

Use these results at your own risk! My test situation was specific and might differ from your own situation, thus performing your own tests and basing your decisions on those results is recommended. This blog post merely indicates that moving from Java 8 to Java 11 might not improve performance, and that moving away from Java 8 prevents you from using native compilation at the moment.

The post Performance! 3 reasons to stick to Java 8 for the moment appeared first on AMIS Oracle and Java Blog.


Graceful shutdown of forked workers in Python and JavaScript running in Docker containers


You might encounter a situation where you want to fork a script during execution, for example when the number of forks depends on user input or another specific situation. I encountered such a situation when I wanted to put load on a service using multiple concurrent processes. In addition, when running in a Docker container, only the process with PID=1 receives a SIGTERM signal. When it has terminated, the worker processes receive a SIGKILL signal and are not allowed a graceful shutdown. In order to allow a graceful shutdown of the worker processes, the main process needs to manage them and only exit after the worker processes have terminated gracefully. Why do you want processes to be terminated gracefully? In my case because I store performance data in memory (disk is too slow) and only write the data to disk when the test has completed.

This seems relatively straightforward, but there are some challenges. I implemented this in JavaScript running on Node and in Python; Python and JavaScript handle forking differently.

Docker and the main process

There are different ways to start a main process in a Dockerfile. A best practice (e.g. here) is to use the ENTRYPOINT exec syntax, which accepts a JSON array specifying a main executable and fixed parameters. A CMD can be used to give some default parameters. The ENTRYPOINT exec syntax can look like:

ENTRYPOINT ["/bin/sh"]

This will start sh with PID=1. ENTRYPOINT also has a shell syntax. For example:

ENTRYPOINT /bin/sh

This does something totally different! It actually executes '/bin/sh -c /bin/sh', in which the first /bin/sh has PID=1. The second /bin/sh will not receive a SIGTERM when 'docker stop' is called. Also, a CMD after the second ENTRYPOINT example is ignored, while in the first case it supplies default arguments. A benefit of using the shell variant is that variable substitution takes place. The below examples can be executed by putting the code in a Dockerfile and running the following commands:

docker build -t test .
docker run test

Thus

FROM registry.fedoraproject.org/fedora-minimal
ENV greeting hello
ENTRYPOINT [ "/bin/echo", "$greeting" ]
CMD [ "and some more" ]

will display '$greeting and some more', and /bin/echo will have PID=1. In contrast,

FROM registry.fedoraproject.org/fedora-minimal
ENV greeting hello
ENTRYPOINT /bin/echo $greeting
CMD [ " and some more" ]

will display 'hello', and you cannot be sure of the PID of /bin/echo.

You can use arguments in a Dockerfile. If however you do not want to use the shell variant of ENTRYPOINT, here’s a trick you can use:

FROM azul/zulu-openjdk:8u202
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

You can use the argument in a COPY statement and make sure the target file is always the same. This way you can use the same ENTRYPOINT line, while the app.jar file which is run is determined by a supplied build argument.

Forking and signal handling

Python

See the complete sample code here

In Python you need only two imports to do forking and signal handling: os and signal. You can fork a process by calling:

pid=os.fork()

What this does is split the code execution from that point forward, with one important difference between master and worker process: the value of pid in the worker is 0, while in the master it is the PID of the newly created worker (greater than 0). You can base logic on the value of pid which is specific to master or worker. Do not mistake the pid variable for the result of os.getpid(); both processes have their own os.getpid() value greater than 0.

If you want the master to be able to signal the workers, you can save the pids of the workers (as returned by os.fork()) in variables in the master. You can register a signal handler using signal.signal(signal.SIGINT, exit_signal_handler); in this case the function exit_signal_handler is called when SIGINT is received. You can signal a worker by doing os.kill(worker_pid, signal.SIGINT) in the cleanup procedure of the master. Do not forget to wait until the worker is finished with finished = os.waitpid(worker_pid, 0), or else the master might be finished before the worker, causing the worker to be killed in a not so graceful manner.

JavaScript

See the complete sample code here

In JavaScript, forking might be a less obvious thing to do compared to Python, since in JavaScript it is good practice to code as non-blocking as possible. The event loop and the (Node.js) workers which pick up tasks will take care of threading for you. It is a common misconception that JavaScript running on Node is single threaded. It is not; there are multiple worker threads handling tasks. Every fork in this example has its own thread pool and worker threads, thus the total number of threads JavaScript uses when forking is (much) higher than in Python.

A drawback of counting on the workers which pull events from the event loop is that it is difficult to obtain fine-grained control and thus predictability. If I put something on the event loop, I have no guarantee about when it will be picked up. Also, I can't be sure the callback handler is called immediately after execution. In my case I needed that control, so I eventually gave up on Node. I did however create an implementation similar to the Python one described above.

In JavaScript you can use cluster and process to provide forking and signal handling capabilities:

var cluster = require('cluster');
var process = require('process');

Forking can be done with:

cluster.fork();

This works differently than in Python, though. A new process with a new PID is created. This new process starts the code from the start, with some differences: cluster.isMaster is false in the workers, and the master contains an array of workers: cluster.workers. This can be used to signal the workers and wait for them to have gracefully shut down. Also do mind that master and workers do not share variable values, since the worker is started as an entirely new process and does not split execution at the fork command like Python does.

Signal handling can be done like this:

process.on('SIGTERM', () => {
    mylogger("INFO\t"+pid+"\tSIGTERM received");
    cleanup();
});

Signalling the workers can be done with:

for (const id in cluster.workers) {
      mylogger("INFO\t"+pid+"\tSending SIGINT to worker with id: "+String(id));
      cluster.workers[id].process.kill();
}

The process.kill() call on the worker waits until the worker has gracefully shut down. Do mind that in the cleanup function you need to do different things for the master and the workers. Also mind that the id of the worker in the master is not the PID; the PID is process.pid.

Putting it together

In order to put everything together and make sure that, when a docker stop is issued, even the workers get a chance at a graceful shutdown, several things are needed:

  • The master needs to have PID=1 so it can receive the SIGTERM which is issued by docker stop. This can be achieved by using the ENTRYPOINT exec syntax
  • The master needs a signal handler for SIGTERM in order to respond and inform the workers
  • The master needs to know how to signal the workers (by pid for Python and by id for JavaScript). In JavaScript an array of workers is available by default. In Python you need to keep track yourself.
  • The master needs to signal the workers
  • The master needs to wait for the workers to finish with their graceful shutdown before exiting itself. Else the workers are still killed in a not so graceful manner. This works out of the box in JavaScript. In Python, it needs an explicit os.waitpid.
  • The workers need signal handlers to know when to initiate a graceful shutdown

You now know how to do all of this in Python and JavaScript with available sample code to experiment with. Have fun!

The post Graceful shutdown of forked workers in Python and JavaScript running in Docker containers appeared first on AMIS Oracle and Java Blog.

A transparent Spring Boot REST service to expose Oracle Database logic


Sometimes you have an Oracle database which contains a lot of logic and you want to expose specific logic as REST services. There are a variety of ways to do this. The most obvious one to consider might be Oracle REST Data Services (ORDS). ORDS needs to be hosted on an application server and requires some specific configuration. It is quite powerful though (it supports, for example, multiple authentication mechanisms like OAuth) and it is a product supported by Oracle. Another option might be the database embedded PL/SQL gateway. This gateway, however, is deprecated for APEX and difficult to tune (believe me, I know).

Sometimes there are specific requirements which make the above solutions not viable, for example when you do not want to install and manage an application server, or when complex custom authentication logic implemented elsewhere is difficult to translate to ORDS or the embedded PL/SQL gateway. Also, ORDS and the PL/SQL gateway are not container-native solutions, thus features like automatic scaling when load increases might be difficult to implement.

You can consider creating your own custom service in, for example, Java. The problem here however is that such a service is often tightly coupled with the implementation. If for example parameters of a database procedure are mapped to Java objects, or a translation from a view to JSON takes place in the service, there is a tight coupling between the database code and the service.

In this blog post I'll provide a solution: a transparent Spring Boot REST service which forwards everything it receives to the database for further processing, without this tight coupling; the service only talks to a single generic database procedure which handles all REST requests. The general flow of the solution is as follows:

  • The service receives an HTTP request from a client
  • Service translates the HTTP request to an Oracle database REST_REQUEST_TYPE object type
  • Service calls the Oracle database over JDBC with this Object
  • The database processes the REST_REQUEST_TYPE and creates a REST_RESPONSE_TYPE Object
  • The database returns the REST_RESPONSE_TYPE Object to the service
  • The service translates the REST_RESPONSE_TYPE Object to an HTTP response
  • The HTTP response is returned to the client

How does it work?

What is a REST request? Well… REST is an architectural style; you don't talk about SOA or EDA requests either, do you? We're talking about HTTP requests in this case, but the method can be applied to other protocols like gRPC, JMS or Kafka if you like, although that requires some changes to the code.

First, if we want a transparent solution which forwards requests to the database and returns responses from the database, we have to know what a request and a response actually are.

What is an HTTP request?

You can read about some basics of HTTP requests here.

An HTTP request consists of:

  • A method. GET, POST, etc.
  • A URL
  • A list of HTTP headers such as Content-Type, Host, Accept. Security people like these because web-browsers tend to interpret them. See for example the OWASP Secure Headers Project
  • A body

What is an HTTP response?

Generally speaking, an HTTP response consists of:

  • A status code
  • HTTP headers
  • A body

So what does the service do? 

It accepts HTTP requests and translates them to Oracle objects which describe the HTTP request. These objects are then used to call a database procedure over JDBC (protocol translation). Additionally, you can use the service for things like security, request logging, service result caching, etc.
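
To illustrate the idea, below is a minimal sketch of what such a transparent endpoint could look like in Spring. The catch-all mapping and the class name are hypothetical, not the actual service code (which is linked further on in this post):

import org.springframework.http.HttpStatus;
import org.springframework.http.RequestEntity;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical sketch: every request under /api/v1/ ends up in one handler,
// so no service code needs to change when the database logic changes
@RestController
public class DispatcherController {

    @RequestMapping("/api/v1/**")
    public ResponseEntity<String> dispatch(RequestEntity<String> request) {
        // method, URL, headers and body together describe the HTTP request;
        // in the real service these are translated to a REST_REQUEST_TYPE
        // object and sent to the database dispatcher over JDBC
        String summary = request.getMethod() + " " + request.getUrl();
        return ResponseEntity.status(HttpStatus.OK).body(summary);
    }
}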

Oracle database

PL/SQL limitations

PL/SQL has some limitations to deal with. For example, you cannot define object types in package specifications. And you cannot create an associative array (for storing HTTP headers) inside an Oracle Object type.

How to work around these limitations

In order to deal with these limitations, an Oracle Object structure is a good choice. See here. In a body of a package, you can then use these types. Also they can be transported over JDBC. The service (of which you can view the code here) calls the procedure with the required parameters.

Java service

JDBC limitations

JDBC in general does not provide specific Oracle database functionality and datatypes. The Oracle JDBC driver in addition also has some limitations (read the FAQ): Oracle JDBC drivers do not support calling arguments or return values of the PL/SQL types TABLE (now known as indexed-by tables), RESULT SET, RECORD, or BOOLEAN, and there are currently no plans to change this. Instead, people are encouraged to use RefCursors, Oracle collections and structured object types. I decided to use object types since they are easy to use in the database and allow nesting. The main challenge in the service was constructing the correct objects.
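
Below is a minimal sketch of how such objects can be constructed and passed over JDBC, assuming a plain java.sql.Connection to the schema owning the types (type names may need schema qualification; error handling is omitted):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Struct;
import java.sql.Types;
import oracle.jdbc.OracleConnection;

// Sketch: build a REST_REQUEST_TYPE object and call gen_rest.dispatcher
public class DispatcherClient {

    public Struct callDispatcher(Connection connection, String method, String url, String body) throws Exception {
        OracleConnection oracleConnection = connection.unwrap(OracleConnection.class);
        // a single HTTP header as HTTP_HEADER_TYPE
        Struct header = oracleConnection.createStruct("HTTP_HEADER_TYPE",
                new Object[]{"Content-Type", "application/json"});
        // the headers table as HTTP_HEADERS_TYPE
        java.sql.Array headers = oracleConnection.createOracleArray("HTTP_HEADERS_TYPE",
                new Object[]{header});
        // the request itself as REST_REQUEST_TYPE
        Struct request = oracleConnection.createStruct("REST_REQUEST_TYPE",
                new Object[]{method, url, headers, body});
        try (CallableStatement cs = oracleConnection.prepareCall("{call gen_rest.dispatcher(?, ?)}")) {
            cs.setObject(1, request);
            cs.registerOutParameter(2, Types.STRUCT, "REST_RESPONSE_TYPE");
            cs.execute();
            // the response Struct can be unpacked with getAttributes()
            return (Struct) cs.getObject(2);
        }
    }
}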

How to run this example

Get a database

First of course, have a running Oracle database. You can use an existing database in your application landscape or, for testing purposes, install one yourself. If you're familiar with Vagrant, an easy way to get up and running quickly can be found here. If you're not familiar with Vagrant, you can also install an Oracle database in a Docker image. For that you have two mechanisms: build it yourself (see here) or download it from Oracle's container registry. If you do not care about having the database isolated from the rest of your system, you can also install it outside VirtualBox/Docker. I recommend XE if you want to go this path, since the other database versions require more steps to install.

Create a database user

First login as a system user and create a user which is going to contain the dispatcher.

create user testuser identified by Welcome01;
grant dba, resource, connect to testuser;

Of course in an enterprise environment, you want to be a bit more specific with your grants.

Create the database objects

CREATE OR REPLACE TYPE HTTP_HEADER_TYPE AS OBJECT
(
name VARCHAR2(255),
value VARCHAR2(2014)
);
/
CREATE OR REPLACE TYPE HTTP_HEADERS_TYPE AS TABLE OF HTTP_HEADER_TYPE;
/
CREATE OR REPLACE TYPE REST_REQUEST_TYPE AS OBJECT
(
HTTP_METHOD VARCHAR2(16),
HTTP_URL VARCHAR2(1024),
HTTP_HEADERS HTTP_HEADERS_TYPE,
HTTP_BODY CLOB
);
/
CREATE OR REPLACE TYPE REST_RESPONSE_TYPE AS OBJECT
(
HTTP_STATUSCODE NUMBER,
HTTP_HEADERS HTTP_HEADERS_TYPE,
HTTP_BODY CLOB
);
/

Create the database package

Below is a minimal example. You can of course write your own implementation here; as long as the specification remains the same, your Java code does not require changing.

CREATE OR REPLACE PACKAGE gen_rest AS

    PROCEDURE dispatcher (
        p_request    IN    rest_request_type,
        p_response   OUT   rest_response_type
    );

END gen_rest;
/
CREATE OR REPLACE PACKAGE BODY gen_rest AS

    PROCEDURE dispatcher (
        p_request    IN    rest_request_type,
        p_response   OUT   rest_response_type
    ) AS
        l_httpheader    http_header_type;
        l_httpheaders   http_headers_type := http_headers_type();
    BEGIN
        l_httpheader := http_header_type('Content-Type', 'application/json');
        l_httpheaders.extend;
        l_httpheaders(l_httpheaders.count) := l_httpheader;
        p_response := rest_response_type(200, l_httpheaders, '{"response":"Hello World"}');
    END dispatcher;

END gen_rest;
/

Download and install the JDBC driver locally 

  • Download the driver from here
  • Make sure you have Apache Maven installed and present on the path. If you’re familiar with Chocolatey, use: ‘choco install maven’.
  • Install the JDBC driver to your local Maven repository
    mvn install:install-file -Dfile="ojdbc8.jar" -DgroupId="com.oracle" -DartifactId=ojdbc8 -Dversion="19.3" -Dpackaging=jar

Download and run the Java service

You can find the code here. The actual Java code consists of two classes and a configuration file. The configuration file, application.properties, contains information required by the Hikari connection pool to be able to create connections. This is also the file you need to update when the database has a different service name or hostname.

The service itself is a Spring Boot service. After you have downloaded the code, you can run it like any other Spring Boot service.

Go to the folder where the pom.xml is located

mvn clean package
java -jar .\target\RestService-1.0-SNAPSHOT.jar

Now you can open your browser and go to http://localhost:8080/api/v1/blabla (or any URL after /v1/)

Finally

Considerations

This setup has several benefits;

  • There is only a single location which has business logic
    Business logic is located in the database and not in the service. You might argue that this is not where the logic should be; however, in my opinion it is better in a single location than distributed over two locations. If the current situation is that the database contains the logic, it is often easiest to keep it there. In the long term however, this causes a vendor lock-in.
  • A custom service is flexible
    • The Java service is container ready and easily scalable.
    • The Java service is thin/transparent. You know exactly what happens (not much) and it has the potential to be a lot faster than products which provide more functionality which you might not need.
    • The service can be enriched with whatever custom functionality you like. Products such as ORDS and the PL/SQL gateway are often more difficult to extend and you are not allowed to alter the (closed source) products themselves.
  • Not so tight coupling between service and database.
    The service code does not need to change when the database code changes; only a single version of the service is required. If the messages which are exchanged change (because of changes in the database code), the service does not need to be changed. If the service is built by another team than the database code, these teams do not need to coordinate their planning and releases.

There are of course some drawbacks;

  • Some changes still require redeployment of the service
    If the database itself changes, for example gets a new hostname or requires a new JDBC driver to connect to, the service most likely needs to be redeployed. In a container environment however, you can do this with a rolling zero-downtime upgrade.
  • Custom code is your own responsibility
    The service is quickly put together custom code which has not proven itself for production use. I can only say: ‘trust me, it works.. (probably) ;)’
    • There has not been extensive testing. I didn’t take the effort of mocking an Oracle database (JDBC, Oracle database with custom objects, procedures) in a test. Sorry about that.
    • Documentation is limited to this blog post and the comments in the code. 
    • There is no software supplier you can go to for support or to report bugs, or use to avoid the responsibility of having to deal with issues yourself.
  • Your database developers will create functionality
    You’re completely dependent on your database developers to implement service functionality. This can be a benefit or drawback, dependent on the people you have available.
  • This solution is Oracle database specific
    You’re going to use PL/SQL to implement services. It is not easily portable to other databases. If you do not have a specific reason to implement business logic in your database, do not go this way and cleanly split data and logic preferably in different systems.

JSON in the Oracle database

The example service which has been provided offers little functionality. Functionality is of course customer specific. A challenge can be to process a body and formulate a response from the database. A reason for this is that the request and response body might contain JSON. JSON functionality has only recently been introduced in the Oracle database: a few packages/procedures in 12c and a lot more functionality in 18c and 19c. 11g however offers close to nothing. For 11g there are some alternatives to implement JSON handling; see for example here. Installing APEX is the easiest. This provides the APEX_JSON package which has a lot of functionality. This package is part of the APEX runtime, so you do not need to install the entire development environment. An alternative is the open source library PL/JSON here, or, if you don't care about breaking license agreements, you can extract just the parts you need (of course without any warranties or support).

Suggested improvements to the Java service

The sample service is provided as a minimal example. It does not catch errors and create safe error messages from them. This is a security liability, since information on the backend systems can arrive at the user of the service. Also, as indicated, the service is not secured: it might be vulnerable to DoS attacks, and anyone who can call the service can access the database procedure. I've not looked at tuning the connection pool yet. Of course you should pay attention to the PROCESSES, SESSIONS and OPEN_CURSORS settings of the database, among others, especially if the service receives lots of calls and has a lot of instances. I've not looked at behavior at high concurrency. The service could be re-implemented using for example Spring WebFlux and reactive JDBC drivers to make a single instance more scalable. Of course you can also consider implementing a service result cache, preferably an external cache (to share state over service instances).

The post A transparent Spring Boot REST service to expose Oracle Database logic appeared first on AMIS Oracle and Java Blog.

Apache Camel and Spring Boot: Calling multiple services in parallel and merging results


Sometimes you have multiple services you want to call at the same time and merge their results when they're all in (or after a timeout). In Enterprise Integration Patterns (EIP) this is a Splitter followed by an Aggregator. I wanted to try and implement this in Spring Boot using Apache Camel, so I did. Since this is my first quick try at Apache Camel, I might not have followed all best practices. I used sample code from Baeldung's blog and combined it with this sample of sending parallel requests using Futures. You can browse my code here.

Run the sample

First clone https://github.com/MaartenSmeets/camel-samples.git

Next

mvn clean package
java -jar .\target\simplerouter-1.0-SNAPSHOT.jar

What does it do?

I can fire off a request to an api-splitter endpoint with a JSON message. It then calls 2 services in parallel (using separate threads) and merges their results. The combined result is returned.

It is pretty fast; response times on my laptop are around 10 ms.

How does it work?

Two separate services are hosted to accept requests: the api-splitter and the api itself. Next to those are the api-docs, which are in Swagger v2 format so you can easily import them into a tool like Postman.

Camel uses Components. 'http' is such a Component and so is 'direct'. These Components have their own specific configuration. There is a REST DSL available to expose endpoints and indicate which requests are accepted and where they should go; the DSL can indicate to which Component a request should go. 'direct' Components can be called from within the same CamelContext. This is where most of the 'heavy lifting' happens in my example. The from and to syntax for forwarding requests is pretty straightforward and the RouteBuilder is easy to use.

There is an object available to map the JSON request to in this example. You're not required to map the JSON to a Java object, but for processing inside your Java code this can come in handy. The api-splitter calls a direct:splitter which creates two Futures to do the calls in parallel to the local api (which maps the JSON to a Java object and does some processing). When the results are received, they are parsed to JSON and the results from both services are merged into a single array, as sketched below.
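
A minimal sketch of such a splitter route is shown below (the endpoint URI and names are hypothetical, not the exact code from the repository):

import java.util.concurrent.CompletableFuture;

import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;

// Sketch: call the same endpoint twice in parallel using Futures and
// merge both JSON answers into a single array
public class SplitterRoute extends RouteBuilder {

    @Override
    public void configure() {
        ProducerTemplate template = getContext().createProducerTemplate();

        from("direct:splitter").process(exchange -> {
            String request = exchange.getIn().getBody(String.class);

            // fire both calls on separate threads
            CompletableFuture<String> first = CompletableFuture.supplyAsync(
                    () -> template.requestBody("http://localhost:8080/camel/api", request, String.class));
            CompletableFuture<String> second = CompletableFuture.supplyAsync(
                    () -> template.requestBody("http://localhost:8080/camel/api", request, String.class));

            // wait for both results and merge them into a single JSON array
            exchange.getIn().setBody("[" + first.join() + "," + second.join() + "]");
        });
    }
}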

Finally

A nice first experience with Apache Camel. I had some challenges with TypeConverters and getting the body in the correct shape/type, but beyond that the experience was quite good. It is relatively easy to use (in this small scenario), integrates well with Spring Boot, and my first impression is that it is quite flexible. Also, the default logging provides useful information on the service call stack.

Of course, what I didn't implement was a message agnostic routing mechanism, and I haven't checked what happens when one of the services doesn't respond; if you want a timeout, you can configure one in the HTTP Component. The code around the Futures will require some proper exception handling though, in order to return a nice message if one of the calls fails and the other doesn't.

The post Apache Camel and Spring Boot: Calling multiple services in parallel and merging results appeared first on AMIS Oracle and Java Blog.

Microservice framework startup time on different JVMs (AOT and JIT)


When developing microservices, a fast startup time is useful. It can for example reduce the amount of time a rolling upgrade of instances takes and reduce build time thus shortening development cycles. When running your code using a ‘serverless’ framework such as for example Knative or FnProject, scaling and getting the first instance ready is faster.

When you want to reduce startup time, an obvious thing to look at is ahead-of-time (AOT) compilation such as provided by an early adopter plugin of GraalVM. Several frameworks already support this out of the box, such as Helidon SE, Quarkus and Micronaut. Spring will probably follow with version 5.3, expected Q2 2020. AOT-compiled code, although fast to start, still shows differences per framework. Which framework produces the native executable which is fastest to start?

If you need specific libraries which cannot be natively compiled (not even when using the Tracing Agent), using Java the old-fashioned JIT way is also an option. You will not achieve start-up times near AOT start-up times, but by choosing the right framework and JVM it can still be acceptable.

In this blog post I'll provide some measurements of the start-up times of minimal implementations of several frameworks, plus an implementation with only Java SE. I've looked at both JIT and AOT (wherever this was possible) and ran the code on different JVMs.

Disclaimer

These measurements have been conducted on specific hardware (my laptop) using specific test scripts on specific though minimal (and comparable) implementations. This is of course not the same as a full blown application running in production on specialized hardware. Use these measurements as an inspiration to get a general idea of what the differences in startup time might be. If you want to know for sure whether these differences are similar for you, conduct your own tests which are representative for your situation, on your hardware, and see for yourself.

Setup

At the end of this blog post you can find a list of framework versions which have been used for my tests. The framework implementations which I’ve used can be found here.

Measuring startup time

In order to determine the startup time, I looked for the line in the logging where the framework indicated it was ready:

  • Helidon SE
    WEB server is up!
  • Micronaut
    Startup completed in
  • Microprofile (Open Liberty)
    server is ready to run a smarter planet
  • Spring Boot and related
    JVM running for
  • Vert.x
    Succeeded in deploying verticle
  • Akka
    Server online at
  • Quarkus
    started in

I wanted to measure the time between issuing the java command to run the JAR file and the first occurrence of one of the above lines. I found the magic on how to do this here. Based on this I could execute the following to get the wanted behavior:

expect -c "spawn JAVACOMMAND; expect \"STRING_TO_LOOK_FOR\" { close }" > /dev/null 2>&1

Next I needed to time that command. In order to do that, I did the following:

ts=$(date +%s%N)
expect ...
echo ADDITIONAL_INFO_ABOUT_MEASURE,$((($(date +%s%N) - $ts)/1000000)) >> output.txt

I did this instead of using the time command because of the higher accuracy and because of the way I piped the expect output to /dev/null.

Implementing a timeout

I noticed that the expect command sometimes left my process running. I did not dive into the specifics as to why this happened, but it caused subsequent tests to fail since the port was already claimed. I installed the 'timelimit' tool and specified both a WARN and a KILL signal timeout (timelimit -t30 -T30). After that I did a 'killall -9 java' just to be sure. The tests ran a long time, and during that time I couldn't use my laptop for other things (it would have disturbed the tests). Having to redo a run is frustrating and time consuming; thus I wanted to be sure that after a run the Java process was gone.

JVM arguments

java8222cmd="/jvms/java-8-openjdk-amd64/bin/java -Xmx1024m -Xms1024m -XX:+UseG1GC -XX:+UseStringDeduplication -jar"
java1104cmd="/jvms/java-11-openjdk-amd64/bin/java -Xmx1024m -Xms1024m -XX:+UseG1GC -XX:+UseStringDeduplication -jar"
javaopenj9222="/jvms/jdk8u222-b10/bin/java -Xmx1024m -Xms1024m -Xshareclasses:name=Cache1 -jar"
javaoracle8221="/jvms/jdk1.8.0_221/bin/java -Xmx1024m -Xms1024m -XX:+UseG1GC -XX:+UseStringDeduplication -jar"

The script to execute the test and collect data

I created a script to execute my test and collect the data. The Microprofile fat JAR generated a large temp directory on each run which was not cleaned up after exiting. This quickly filled my HD, so I needed to clean it 'manually' in my script after each run.

Results

The raw measures can be found here. The script used to process the measures can be found here.

JIT

The results of the JIT compiled code can be summarized as follows:

  • Of the JVMs, OpenJ9 is the fastest to start for every framework. Oracle JDK 8u221 (8u222 was not available yet at the time of writing) and OpenJDK 8u222 show almost no difference.
  • Of the frameworks, Vert.x, Helidon and Quarkus are the fastest. 
  • Java 11 is slightly slower than Java 8 (in the case of OpenJDK). Since this could be caused by different default garbage collection algorithms, I forced them both to G1 GC. This has been previously confirmed here. Those results cannot be compared to this blog post one-on-one, since the tests in this blog post have been executed (after 'priming') on JVMs running directly on Linux (and on different JVM versions), while in the other test they were running in Docker containers. In a Docker container, more needs to be done to bring up an application, such as loading libraries which are present in the container, while when running a JVM directly on an OS, shared libraries are usually already loaded, especially after priming.
  • I also have measurements for Azul Zing. Although I tried using the ReadyNow! and Compile Stashing features to reduce startup time, I did not manage to get the startup time even close to the other JVMs. Since my estimate is that I must have done something wrong, I have not published these results here.

AOT

The ranking of JIT compiled code startup times does not appear to correspond to that of AOT code start-up times: Micronaut does better than Quarkus in the AOT area. Do notice the scale on the axis: AOT code is a lot faster to start up compared to JIT code.

Finally

File sizes

See the below graph for the size of the fat JAR files which were tested. The difference in file size is not sufficient to explain the differences in startup time. Akka for example is quite fast to start up, but its file size is relatively large. The Open Liberty fat JAR is huge compared to the others, but its start-up time is much lower than you would expect based on that size. The no-framework JAR is not shown, but it was around 4 Kb.

Servlet engines

The frameworks use different servlet engines to provide REST functionality. The servlet engine alone, however, says little about the start-up time, as you can see below. Quarkus, one of the frameworks which is quick to start up, uses Netty, but so do Spring WebFlux and Spring Fu (via Reactor Netty), which are not fast to start at all.

Other reasons?

An obvious reason why Spring might be slow to start is the classpath scanning it does during startup; this can cause it to be slower than it could be without it. Micronaut, for example, processes annotations during compile time, taking this work away from startup and making it faster to start.

It could be that a framework reports itself to be ready while the things it hosts are not fully loaded. Can you trust a framework which says it is ready to accept requests? Maybe some frameworks only load specific classes at runtime when a service is called, while others preload everything before reporting ready. This can give a skewed measure of the startup time of a framework. What I could have done is fire off the start command in the background, then fire off requests to the service and record the first time a request is successfully handled as the start-up time. I didn't though; this might be something for the future.
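
A sketch of such a measurement could look as follows (a hypothetical helper; the polling timeouts and the HTTP 200 check are assumptions):

import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: record how long it takes until the service answers its first
// successful request, instead of trusting the framework's log line
public class ReadinessTimer {

    public static long timeToFirstResponse(String endpoint, long timeoutMillis) throws Exception {
        long start = System.nanoTime();
        while ((System.nanoTime() - start) / 1_000_000 < timeoutMillis) {
            try {
                HttpURLConnection connection = (HttpURLConnection) new URL(endpoint).openConnection();
                connection.setConnectTimeout(100);
                connection.setReadTimeout(100);
                if (connection.getResponseCode() == 200) {
                    return (System.nanoTime() - start) / 1_000_000; // milliseconds
                }
            } catch (Exception notUpYet) {
                // service not listening yet; try again
            }
        }
        throw new IllegalStateException("Service did not come up within " + timeoutMillis + " ms");
    }
}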

The post Microservice framework startup time on different JVMs (AOT and JIT) appeared first on AMIS Oracle and Java Blog.

Calling an Oracle DB stored procedure from Spring Boot using Apache Camel


There are different ways to create data services. The choice for a specific technology depends on several factors within the organisation which wishes to realize these services. In this blog post I'll provide a minimal example of how you can use Spring Boot with Apache Camel to call an Oracle database procedure which returns the result of an SQL query as XML. You can browse the code here.

Database

How to get an Oracle DB

Oracle provides many options for obtaining an Oracle database. You can use the Oracle Container Registry (here) or use an XE installation (here). I decided to build my own Docker image this time. This provides a nice and quick way to create and remove databases for development purposes. Oracle provides prepared scripts and Dockerfiles for many products including the database, to get up and running quickly.

  • git clone https://github.com/oracle/docker-images.git
  • cd docker-images/OracleDatabase/SingleInstance/dockerfiles
  • Download the file LINUX.X64_193000_db_home.zip from here and place it in the 19.3.0 folder
  • Build your Docker image: ./buildDockerImage.sh -e -v 19.3.0
  • Create a local folder, for example /home/maarten/dbtmp19c, and make sure anyone can read, write and execute to/from/in that folder. The user from the Docker container has a specific userid, and by allowing anyone to access the folder you avoid permission problems. This is of course not a secure solution for production environments! I don't think you should run an Oracle database in a Docker container for other than development purposes anyway. Consider licensing and patching requirements.
  • Create and run your database. The first time it takes a while to install everything. The next time you start it is up quickly.
    docker run --name oracle19c -p 1522:1521 -p 5500:5500 -e ORACLE_SID=sid -e ORACLE_PDB=pdb -e ORACLE_PWD=Welcome01 -v /home/maarten/dbtmp19c:/opt/oracle/oradata oracle/database:19.3.0-ee
  • If you want to get rid of the database instance
    (don’t forget the git repo though)
    docker stop oracle19c
    docker rm oracle19c
    docker rmi oracle/database:19.3.0-ee
    rm -rf /home/maarten/dbtmp19c
    Annnnd it’s gone!

Create a user and stored procedure

Now you can access the database with the following credentials (from your host). For example by using SQLDeveloper.

  • Hostname: localhost 
  • Port: 1522 
  • Service: sid 
  • User: system 
  • Password: Welcome01

You can create a testuser with

alter session set container = pdb;

-- USER SQL
CREATE USER testuser IDENTIFIED BY Welcome01
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";

-- ROLES
GRANT "DBA" TO testuser ;
GRANT "CONNECT" TO testuser;
GRANT "RESOURCE" TO testuser;

Login to the testuser user (notice the service is different)

  • Hostname: localhost 
  • Port: 1522 
  • Service: pdb 
  • User: testuser 
  • Password: Welcome01

Create the following procedure. It returns information of the tables owned by a specified user in XML format.

CREATE OR REPLACE PROCEDURE GET_TABLES 
(
  p_username IN VARCHAR2,RESULT_CLOB OUT CLOB 
) AS
p_query varchar2(1000);
BEGIN
  p_query := 'select * from all_tables where owner='''||p_username||'''';
  select dbms_xmlgen.getxml(p_query) into RESULT_CLOB from dual;
END GET_TABLES;

This is an easy example of how to convert a SELECT statement result to XML in a generic way. If you need to create a specific XML structure, you can use XMLTRANSFORM or create your XML 'manually' with functions like XMLFOREST, XMLAGG, XMLELEMENT, etc.

Data service

In order to create a data service, you need an Oracle JDBC driver to access the database. Luckily, Oracle has recently put its JDBC driver in Maven Central for ease of use. Thank you Kuassi and the other people who have helped make this possible!

        <dependency>
            <groupId>com.oracle.ojdbc</groupId>
            <artifactId>ojdbc8</artifactId>
            <version>19.3.0.0</version>
        </dependency>

The Spring Boot properties which are required to access the database:

  • spring.datasource.url=jdbc:oracle:thin:@localhost:1522/pdb
  • spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
  • spring.datasource.username=testuser
  • spring.datasource.password=Welcome01

The part of the code which actually does the call, prepares the request and returns the result revolves around the sql-stored endpoint. The template for the call is the following:

sql-stored:get_tables('p_username' VARCHAR ${headers.username},OUT CLOB result_clob)?dataSource=dataSource

The datasource is provided by Spring Boot / Spring JDBC / Hikari CP / the Oracle JDBC driver; you get that one for free if you include the relevant dependencies and provide configuration. The format of the template is described here. The example illustrates how to get parameters in and how to get them out again, how to convert a Clob to text, and how to set the body to a specific return variable.

Please mind that if the query does not return any results, the OUT variable is null; getting anything from that object will then cause a NullPointerException. Do not use this code as-is! It is only a minimal example.
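
A route using this template could look like the following sketch (hypothetical names, not the exact code from the repository; the Clob is converted to a String manually to make the conversion explicit):

import java.sql.Clob;
import java.util.Map;

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

// Sketch: call the stored procedure and return the CLOB out parameter as the body
@Component
public class GetTablesRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("direct:getTables")
            .to("sql-stored:get_tables('p_username' VARCHAR ${headers.username},"
                    + "OUT CLOB result_clob)?dataSource=dataSource")
            .process(exchange -> {
                // the component returns the OUT parameters as a Map in the body
                Map<?, ?> out = exchange.getIn().getBody(Map.class);
                Clob clob = (Clob) out.get("result_clob");
                // mind: the OUT parameter is null when the query returns no rows
                exchange.getIn().setBody(
                        clob == null ? "" : clob.getSubString(1, (int) clob.length()));
            });
    }
}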

You can look at the complete example here and build it with mvn clean package. The resulting JAR can be run with java -jar camel-springboot-oracle-dataservice-0.0.1-SNAPSHOT.jar.

Calling the service

The REST service is created with the following code:
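
A minimal sketch of what such a REST DSL definition could look like is shown below (hypothetical, not the exact code from the repository; it assumes the camel-servlet Spring Boot starter maps the servlet under /camel):

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

// Sketch: expose a GET endpoint and forward it to the stored procedure route
@Component
public class RestApiRoute extends RouteBuilder {

    @Override
    public void configure() {
        restConfiguration().component("servlet").contextPath("/api");

        rest("/in")
            .get()
            .route()
            .setHeader("username", constant("TESTUSER")) // assumed demo user
            .to("direct:getTables");
    }
}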

It responds to a GET call at http://localhost:8081/camel/api/in

Finally

Benefits

Creating data services using Spring Boot with Apache Camel has several benefits:

  • Spring and Spring Boot are popular in the Java world. Spring is a very extensive framework providing a lot of functionality ranging from security, monitoring, to implementing REST services and many other things. Spring Boot makes it easy to use Spring.
  • There are many components available for Apache Camel which allow integration with diverse systems. If the component you need is not there, or you need specific functionality which is not provided, you can benefit from Apache Camel being open source.
  • Spring, Spring Boot and Apache Camel are solid choices which have been worked at for many years by many people and are proven for production use. They both have large communities and many users. A lot of documentation and help is available. You won’t get stuck easily.

There is a good chance that when implementing these 2 together, You won’t need much more for your integration needs. In addition, individual services scale a lot better and usually have a lighter footprint than for example an integration product running on an application server platform.

Considerations

There are some things to consider when using these products, such as:

  • Spring / Spring Boot do not (yet) support GraalVM's native compilation out of the box. When running in a cloud environment where memory usage or start-up time matter, you could save money by implementing, for example, Quarkus or Micronaut. Spring will support GraalVM out of the box in version 5.3, expected Q2 2020 (see here). Quarkus has several Camel extensions available, but not the camel-sql extension, since that is based on spring-jdbc.
  • This example might require specific code per service (depending on your database code). This is custom code you need to maintain and it comes with overhead (build jobs, Git repositories, etc.). You could consider implementing a dispatcher within the database to reduce the number of required services; see my blog post on this here (consider not using the Oracle object types, for simplicity). Then however you would be adhering to the 'thick database paradigm', which might not suit your tastes and might cause a vendor lock-in if you start depending on PL/SQL too much. The dispatcher solution is likely not portable to other databases.
  • For REST services on Oracle databases, implementing Oracle REST Data Services is also a viable and powerful option. Although it can do more, it is most suitable for REST services and only works with Oracle databases. If you want to provide SOAP services or also work with other flavors of databases, you might want to reduce the number of different technologies used for data services, to allow for platform consolidation and to keep your LCM challenges from becoming harder than they already are.

The post Calling an Oracle DB stored procedure from Spring Boot using Apache Camel appeared first on AMIS Oracle and Java Blog.
