Next OpenShift users meeting

The next meeting of the German-speaking OpenShift community will take place on September 30, 2019 in Frankfurt. So block your calendar! Looking forward to seeing you there!

More information on the Call for Papers & registration options can be found here


Register now for the next OpenShift Users meeting on 17th October

Registration is now open for the next edition of the OpenShift Users meeting at the Accenture Campus Kronberg on 17th October!

This time we have something special for you: it will be the first meeting with users from Germany, Austria and Switzerland for up to 200 participants. And we’re happy to host Reza Shafii (former VP of products at CoreOS) as one of our keynote speakers.

More information & registration can be found here:


The best OpenShift Labs from Red Hat Summit 2018

At this year’s Red Hat Summit there were some great sessions on containers and OpenShift. All session materials and source code have already been published on GitHub:

Here are the best sessions for OpenShift users:

Have fun!


How to use custom Docker images on OpenShift Online

This quick tip describes how to use your custom-built Docker images on OpenShift Online (Red Hat’s SaaS-based container platform).

Create a new project and get login token

Log in to OpenShift Online and create a new project (for the sake of this demo I use “my-external-project” as its name).

Click on the question mark (upper right corner) and select “Command Line Tools”.

Then copy your access token into your clipboard according to the instructions shown.

Login to the OpenShift Online Docker Registry

Now log in to the OpenShift Online Docker registry using the following command (USERNAME, SECRET_TOKEN and REGISTRY_URL are placeholders for your username, your access token and the registry hostname):



docker login -u USERNAME -p SECRET_TOKEN REGISTRY_URL

If the connection can be established you should receive a “Login succeeded” message.
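If you prefer not to copy the token from the web console, the `oc` CLI can supply it directly. A minimal sketch, assuming you are already logged in with `oc` and that REGISTRY_URL is a placeholder for your cluster’s registry hostname:

```shell
# Use the current session's username and token to authenticate
# against the registry; REGISTRY_URL is a placeholder.
docker login -u "$(oc whoami)" -p "$(oc whoami -t)" REGISTRY_URL
```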

Prepare your local image to be pushed

In this demo I will be using the basic “hello-world” image provided by Docker. First you need to find out the image ID:

docker images

Now we tag the image with a new repository ID (REGISTRY_URL and PROJECT are placeholders for your registry hostname and project name):

docker tag IMAGE_ID REGISTRY_URL/PROJECT/hello-world


docker tag f2a91732366c REGISTRY_URL/my-external-project/hello-world

Now you should see the newly tagged image in your local registry:

Now we can directly push the image to the OpenShift Online registry (since we’ve authenticated before):

docker push REGISTRY_URL/my-external-project/hello-world

In the background OpenShift Online will import the image into its internal registry and will also create a corresponding image stream for it.
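You can verify this with the `oc` client (a sketch, assuming the demo project name from above):

```shell
# The pushed image should show up as an image stream in the project
oc get imagestreams -n my-external-project
```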

The resulting image stream can then be used to create a new deployment from that image. Click “Add to Project” and then “Deploy image”:

Have fun bringing your own Docker based applications onto OpenShift Online!



Background information

Managing images


Getting started with Ansible Playbook Bundles on CDK

Install Ansible Service Broker addon into your CDK installation

Start your CDK environment with registration (important, as yum gets used during addon installation):

minishift start --service-catalog

Clone the addon repository and install Ansible Service Broker addon:

git clone
cd minishift-addons/add-ons/
minishift addons install ansible-service-broker
minishift addons apply ansible-service-broker
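Before continuing, it may be worth checking that the broker pods actually came up (a sketch; the namespace name is an assumption based on the addon’s defaults and may differ in your installation):

```shell
# The addon typically deploys the broker into its own namespace
oc get pods -n ansible-service-broker
```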

When logging in to your CDK console you should already see the preinstalled APBs:

Install the APB command line on your client machine

Configure your shell to use the minishift Docker daemon

eval $(minishift docker-env)

Fetch the APB command line script and make it available in your PATH (APB_SCRIPT_URL is a placeholder for the script’s download location):

wget APB_SCRIPT_URL -O apb && chmod +x apb && mv apb /usr/local/bin/

Verify that your installation works

apb --help

If everything went well you should see something like this:

Test the connection between APB CLI and CDK

You will need special permissions to work with the broker on your CDK installation. Therefore we need to execute the following:

oc login -u system:admin
oc adm policy add-cluster-role-to-user cluster-admin developer
oc login -u developer

Now let’s see if we can list the preinstalled APBs:

apb list

Configure Ansible Service Broker to pull images from local registry

In the default config of our Ansible Service Broker the APBs are pulled from “”. We need to change this to our local registry running within CDK:

  - type: local_openshift
    name: lo
    namespaces:
      - openshift
    white_list:
      - ".*-apb$"

The Ansible Service Broker pod now needs to be restarted in order to pull in the new configuration.
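One way to trigger the restart (a sketch; the deployment config name `asb` and the namespace are assumptions that may differ in your installation):

```shell
# Roll out a new deployment so the pod picks up the changed config
oc rollout latest dc/asb -n ansible-service-broker
```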

Create your first APB

First we use the CLI to scaffold our new service:

apb init sample-service-apb
cd sample-service-apb

Now we build our APB locally. After the process has completed, the newly built APB Docker image should appear in your local Docker image list:

apb build

Finally we need to push the Docker image to our Service Broker inside CDK:

apb push

Now you should be able to see your first APB in your CDK’s service catalog.

Reference Information

Ansible Playbook Documentation

Getting started with APB development

Minishift Addon – Ansible Service Broker

Ansible Service Broker Configuration


Solving problems with Eclipse Secure Storage

I am a long-time active user of Eclipse and Red Hat’s distribution, JBoss Developer Studio. The tool suite has a very nice feature called Eclipse Secure Storage, which allows me to save development-related passwords (GitHub, OpenShift, etc.) in a secure manner on my local system. However, in the last couple of months I regularly had problems with the Eclipse Secure Storage feature not allowing me to use or save any passwords. Even reinstalling and deleting user-specific preferences in my home folder did not help. If you google this type of problem you’ll find a number of users reporting similar issues, but neither a resolution nor a proper workaround.

Today I managed to fix my problem on macOS as follows:

  1. Open the “Keychain Access” application.
  2. Search for an entry called “” and delete it.
  3. Open JBoss Developer Studio and go to Preferences.
  4. Open the “Secure Storage” preferences and delete the entry “[Default Secure Storage]”. Be aware that your saved passwords are lost by doing this.
  5. Restart JBDS and try to save your passwords again.


Additional things to try



How to set up WordPress on OpenShift in 10 minutes

What this is about

A lot of customers would like to give the brave new container world (based on Docker technology) a try with real life workload. The WordPress content management system (yes, it has become more than a simple blog) seems to be an application that many customers know and use (and that I’ve been asked for numerous times). From a technical point of view the WordPress use case is rather simple, since we only need a PHP runtime and a database such as MySQL. Therefore it is a perfect candidate to pilot container aspects on OpenShift Container Platform.


Install Container Development Kit

I highly recommend installing the freely available Red Hat Container Development Kit (CDK for short). It will give you a ready-to-use installation of OpenShift Container Platform based on a Vagrant image, so you’re up to speed in absolutely no time:

Please follow the installation instructions here:

Set up resources on OpenShift

Spin up your CDK environment and ssh into the system:

vagrant up
vagrant ssh

Create a new project and import the template for an ephemeral MySQL (since this is not included in the CDK V2.3 distribution by default). If you prefer to use another database or even one with persistent storage, then you can find additional templates here.

oc new-project wordpress
oc create -f

Now we create one pod for our MySQL database and create our WordPress application based on the source code. OpenShift will automatically determine that it is based on PHP and will therefore choose the PHP builder image to create a Docker image from our WordPress source code.

oc new-app mysql-ephemeral
oc new-app
oc expose service wordpress
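While the build is running you can follow the progress on the command line (a sketch using standard `oc` commands):

```shell
# Watch the source-to-image build, the pods and the exposed route
oc get builds
oc get pods
oc get route wordpress
```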

Now let’s login to the OpenShift management console and see what has happened:

We now have one pod running our WordPress application (web server, PHP, source code) and one pod running our ready-to-use ephemeral (= non-persistent) MySQL database.

Install WordPress

First we need to note down the connection settings for our MySQL database: the cluster IP of our mysql service as well as the database name, username & password. Have a look at the following screenshots:
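If you prefer the command line over the console screenshots, the same information can be read via `oc` (a sketch; the service and deployment config names assume the defaults of the mysql-ephemeral template):

```shell
# Cluster IP of the MySQL service
oc get svc mysql
# Database name, user and password from the deployment's environment
# (newer oc clients use "oc set env" instead of "oc env")
oc env dc/mysql --list
```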

Now it is time to set up and configure WordPress. Simply click on the route that has been created for your wordpress pod (in my case the hostname is “”).

Congratulations for installing WordPress on OpenShift!

What’s next

For now we’ve created all the resources manually in a not yet reusable fashion. Therefore one of the next steps could be to create a template from our resources, import it into the OpenShift namespace and make it available for our users as a service catalog item. So our users could provision a fully installed WordPress with the click of a button.
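A first step in that direction could look like this (a sketch; `oc export` is the OpenShift 3.x mechanism, and the template name is made up):

```shell
# Export the project's resources as a reusable template (OpenShift 3.x)
oc export all --as-template=wordpress-template > wordpress-template.yaml
# Import it so it becomes available to users of this project
oc create -f wordpress-template.yaml
```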


My personal look at the German eID system (“Neuer Personalausweis”)

Business Problem

Many business processes in Germany involve paper (or better: TONS OF PAPER!) and many manual steps: think of opening a bank account or registering a car at your local “Zulassungsstelle”. In my opinion one of the main reasons is that the identity of a user cannot be properly verified online. You could argue that things like video identification or Deutsche Post PostIdent came up to address this problem. However, this only solves part of the problem, since the signature still needs to be provided manually.

In Germany the so-called nPA (neuer Personalausweis) is able to solve this problem by providing a qualified signature, so you will be able to digitally sign contracts online. Therein lies the potential to completely transform tons of paper-based processes. And huge amounts of time and money could be saved as well!


Use cases of the eID system

The nPA has two main functions, “Identification with Online-Ausweisfunktion” and “Electronic Signature”, which enable many exciting use cases. These range from simple verifications (like age checks or address validation) to login mechanisms for websites (the nPA can be considered a single-sign-on system in this context). Moreover, the nPA allows applying a qualified digital signature to documents, which is equal to a handwritten signature (according to German law).

Since its launch in 2010 a couple of federal institutes and enterprises have made their services ready for the nPA:

  • ElsterOnline (German tax)
  • Rentenkonto online (German pension fund)
  • Punkteauskunft aus dem Verkehrszentralregister (VZR)
  • UrkundenService
  • Allianz Kundenportal

A complete list of applications can be found here. However, from my perception the adoption still leaves a lot of room for improvement.

Architectural overview

There is extensive documentation available describing the technical architecture behind the eID system (personally I recommend the information from the BSI found here). That is why I do not want to go into the nitty-gritty details.

However, to give you a rough understanding, have a look at the following illustration, which looks similar to what is available in token-based authentication systems (think of SAML and/or OpenID Connect concepts). There is a service provider (“WebServer”) who wants to protect a service; an authority that is able to validate the identity (“eID-Server”); and a login component (“AusweisApp”) that allows the end user to enter login information like a PIN. Last but not least, the user must have a card reader connected to his local system, which talks to the login component (“AusweisApp”).


It is important to understand that the login component (“AusweisApp”) is implemented as a standalone application, which must run on the user’s computer (and of course be installed beforehand). For 2017 it is planned to release mobile versions of the app (see Google Play Store) in order to use a mobile device as a card reader. In my opinion this will help to reduce the overall complexity from an end user’s perspective.

When looking at the system from a service provider’s point of view (e.g. I am an online shop provider who wants to enable users to log in with their nPA), you have to consider a lot of things. Since there is neither a public instance of the “eID-Server” nor source code available, you have two options: create your own implementation based on the BSI spec or buy the service from a provider. Additionally, you will have to think about how to integrate the token into your application: since there is no “reference implementation” of the “eID-Server” spec, there is little to no documentation available. Overall the process feels rather complex and opaque to me.

A detailed description of the application process can be found here: “Become Service Provider”.


The opportunity behind the German eID system is really huge and could speed up lots of processes and make all of our lives easier. But in my opinion there are a lot of things hindering the adoption and success of the system:

  1. There is no public eID-Server instance that can be used by public and private institutions. This makes the adoption unnecessarily complicated because all service providers have to find a solution for themselves.
  2. There is little documentation available for service providers. Instead there are only tons of specs, which leave a lot of work to the service provider.
  3. Many services require that you map your eID to the identity in their system (at least once). This makes the process very uncomfortable for the end user.
  4. Currently an external card reader is needed. Firstly it has to be bought by the end user and secondly this does not work on the go. Fortunately this caveat has already been addressed with the mobile app version.

My final thoughts: adoption cannot be forced by laws. Instead, I think that the eID system should be developed in a more transparent and community-based manner. Moreover, the integration by service providers should be as easy as putting a social login on my personal website.



New interview on mobile published at JAXCenter

“One should rely on standards and not build a silo for every application” („Man sollte auf Standards setzen und nicht für jede Applikation ein Silo aufbauen“)

Looking forward to kick-off a discussion with you! 🙂


JBoss Mobile Publications

Faster and more efficient processes by combining BPM and Mobile

A. Synopsis

What this is about

A lot has happened in the area of mobile since Apple kicked off the revolution by announcing the first iPhone. However, the overall mobile market still has to be considered young and, above all, unstandardized. This confronts many organizations with huge challenges concerning the efficient development of mobile applications and their secure integration into backend IT systems.

But there is no way around mobile in the coming years! The smart combination of mobile techniques (MBaaS, microservices, etc.) and business process management approaches will definitely drive process efficiency and speed to a whole new level.

The use case or “What if the process was at the fingertips of your customer?”

This showcase addresses a scenario that almost all enterprises in the insurance industry are facing: nowadays users expect to be able to contact their insurer 24/7 on an ad-hoc basis (e.g. for opening a claim or just for asking a question concerning their policy). Additionally, they want to see on demand what the status of a certain request is. From an enterprise point of view, insurers are looking at new ways of reacting to this new speed of communication and transparency. They’re also thinking of new concepts to efficiently integrate agencies and remote workers into their existing processes. The key consequence of these requirements is to enhance the existing input & output management infrastructure with a newly established mobile channel.

In this showcase we used Red Hat Mobile Application Platform as a key building block to efficiently and securely connect the outside world with existing enterprise systems.


Through the platform approach we do not need to reinvent the wheel for each mobile app on the horizon. Instead we put in place a centralized platform for developing and running mobile applications in a standardized manner.

The use of Red Hat Mobile Application Platform (RHMAP) comes with the following benefits:

  • Agile approach to developing, integrating, and deploying enterprise mobile applications—whether native, hybrid, or on the web
  • Out-of-the-box automated build processes (including build farm)
  • A service catalog for reusable connectors to backends
  • Easy scale-out through cloud native architecture
  • Collaborative development across multiple teams and projects with a wide variety of leading tool kits and frameworks

Architectural overview

From a technical point of view the showcase is comprised of three main building blocks:

  • CLIENT LAYER: Hybrid mobile applications running on the end user devices
  • CLOUD LAYER: Node.js based backend running in the cloud on RHMAP
  • BACKEND LAYER: Set of business process applications running on JBoss BPM Suite as the underlying BPM engine

ARC_OVERVIEW - Component model

Client layer

Since we have two different user groups (external end customer and employees) we’ve decided to develop two separate applications:

  • Customer App: This app is meant to be used by our end customers (using a broad range of different mobile devices) and has therefore been implemented with hybrid app development principles in mind. We chose Apache Cordova as our core development framework, which enables us to build our app for all common mobile OSes with only one code base (“develop once, run everywhere” principle). In terms of the UI and application framework we decided to go for a combination of Ionic and AngularJS. Both projects have a vibrant and active community and have been successfully adopted by many projects.

  • Employee App: This app targets remote workers (such as insurance agents) who work on our processes remotely. We’ve decided to go for the same hybrid app approach in order to share code and speed up development. However, for an end user group where we might influence the use of certain device types (such as the Apple iPhone) we could also have considered a native app (RHMAP provides an SDK for all popular mobile OSes, so we could still reuse the existing backend functionality in our cloud layer).

The source code of both applications is hosted on RHMAP which allows us to make use of the built-in build farm (allowing us to create push button builds for iOS, Android et al), configure and also preview the application.

Client application in RHMAP

Cloud layer

The cloud part of an application built with RHMAP is comprised of a so called “Cloud Code App” providing the core functionality for our clients and a set of reusable MBaaS services that enable the connectivity to 3rd party (backend) systems. The following illustration shows an overview of all components created for our showcase:

Application overview in Red Hat Mobile Application Platform

Cloud code apps

For our showcase we’ve implemented a single Node.js based app called Cloud App, which accepts all incoming requests from our client layer. RHMAP provides a feature-rich development framework (including custom Node.js convenience modules) making the creation of cloud code apps easy and efficient. Through the use of Node.js as our programming language we get all the benefits of its evented and asynchronous model, which works extremely well with our use case of a data-intensive real-time application (DIRT paradigm).

MBaaS services (Mobile backend as a service)

An MBaaS (Mobile Backend-as-a-Service) is the primary point of contact for end user applications – both mobile and web. The MBaaS hosts Node.js applications – as REST API servers and/or Express.js based web apps. The primary purpose of the MBaaS is to allow users (developers) of RHMAP to deploy Node.js server-side code for their mobile apps. The MBaaS also provides functionality such as caching, persistence, data synchronization and a range of other mobile-centric features. Multiple MBaaS instances may be utilized for customer segregation and/or lifecycle management (environments).

For this showcase we’ve developed a new MBaaS connector called fh-connector-jbpm-cloud, which is meant to be reused across multiple applications hosted on RHMAP. For the use in our project we’ve instantiated it and configured the environment variables to connect to our specific JBoss BPM Suite in the backend layer.

RHMAP MBaaS BPM connector

Function wise the MBaaS connector currently provides the following functionality:

  • Process management
    • Start process
    • Get process instance
  • Task management
    • Load tasks
    • Load task content
    • Claim task
    • Complete task
    • Release task
    • Start task

Push notifications

We make use of the RHMAP built-in mobile push API, which provides a generic way to interface with multiple push networks (Google Cloud Messaging, Apple Push Notification Service and Microsoft Push Notification Service) via REST or Node.js. This makes it very convenient to send out push notifications from 3rd party applications (such as JBoss BPM Suite, as demonstrated in our showcase).

RHMAP Push Configuration

More information on the push API can be found here
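As a rough sketch, sending such a notification from a backend system boils down to a single REST call (the host, credentials and payload below are assumptions; consult the push API documentation for the exact contract):

```shell
# Hypothetical example: notify all registered devices of a status change.
# PUSH_SERVER, PUSH_APPLICATION_ID and MASTER_SECRET are placeholders.
curl -X POST "https://PUSH_SERVER/rest/sender" \
  -u "PUSH_APPLICATION_ID:MASTER_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"message": {"alert": "Your request status has changed"}}'
```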

Backend layer

This layer is comprised of a large set of different backend systems that typically run inside the datacenter of an organization, such as application servers, databases, messaging systems or ESB-like services. For the sake of this showcase we’ve chosen JBoss BPM Suite as the only system here. The BPM Suite provides a full-blown authoring and runtime environment for business process applications focused on the use of open standards (such as BPMN 2.0). The included BPM engine also exposes a rich REST API that is used extensively by our MBaaS connector fh-connector-jbpm-cloud to start new process instances, control the process flow, etc.
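To illustrate, starting a new process instance through that REST API looks roughly like this (a sketch; host, credentials, deployment ID and process ID are placeholders, and the URL path follows the jBPM 6 remote API conventions):

```shell
# Hypothetical example: start a process instance via the BPM engine's REST API
curl -X POST -u "USER:PASSWORD" \
  "http://BPMS_HOST/business-central/rest/runtime/DEPLOYMENT_ID/process/PROCESS_ID/start"
```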

Request processing application

The core business process for our scenario is implemented as a simple BPMN 2.0 workflow that is being deployed in form of our Java based Request Processing Application.

Business Process Diagram

After being instantiated, the process first sends out a push notification to the requesting customer by simply calling the RHMAP push API. Then a human task called Process request is used to create a new work item in the work basket of our employees. Through our Employee App we empower remote employees to work on the request directly.

In addition, the work items can be claimed via a traditional web-based application named Business Central, which is provided as part of JBoss BPM Suite.

Edit task via Business Central

Based on the decision, the process completes with a corresponding push notification to inform the customer.

More information on how to develop process applications can be found in the JBoss BPM Suite Development Guide.

B. Walkthrough

1. Customer creates new request

Customer App - Login
Customer App - Dashboard
Customer App - Create new request
Customer App - Create new request
Customer App - Create new request
Customer App - Dashboard showing push
Customer App - Show process status
Customer App - Process instance details

2. a) Employee works on request

Request processing application - Work on task instance
Request processing application - View process model

3. b) Agency / Remote worker completes


4. Customer receives push updates on current status

Customer App - Push notification on process status
Customer App - View dashboard
Customer App - View process status

C. Reference Information

Source code

The source code can be found here:

Client layer

Cloud layer

Backend layer

D. Credits

Special thanks to Sebastian Dehn for implementing large parts of the client layer.