Feed aggregator

From a Commodore 64 to DevSecOps

Sonatype Blog - 1 hour 37 min ago
We all know the story: a farm, a kid, a Commodore 64, and a modem maxing out at 300 bps. A few unexpected phone bills later, and young Ian Allison is figuring out how to game the system so he can keep using his newfound gateway to the world of tech. According to Ian, that is where he began...

To read more, visit our blog at
Categories: Companies

Vanilla Java - 4 hours 40 min ago
In this part, we look at putting a microservice together as a collection of services, and consider how we can evaluate the performance of these services. We introduce JLBH (Java Latency Benchmark Harness) to test these services.
Categories: Blogs

Vanilla Java - 4 hours 41 min ago
A common issue we cover in our workshops is how to restart a queue reader after a failure. The answer is not as simple as you might think.
Categories: Blogs

Vanilla Java - 4 hours 42 min ago
One of the problems with using microservices is performance. Latencies can be higher due to the cost of serialization, messaging and deserialization, and this reduces throughput. In particular, poor throughput is a problem because the reason we are designing a scalable system[1] is to increase throughput.
Categories: Blogs

Vanilla Java - 4 hours 42 min ago
In Part 1, we looked at how we can easily create and test components which expect asynchronous messages in and produce asynchronous messages out. However, how do we turn this into a service?
Categories: Blogs

Vanilla Java - 4 hours 44 min ago
At a high level, different microservice strategies have a lot in common. They subscribe to the same ideals. When it comes to the details of how they are actually implemented, they can vary. Microservices in the Chronicle world are designed around:
  • Simplicity – simple is fast, flexible, and easier to maintain.
  • Transparency – you can't control what you don't understand.
  • Reproducibility – this must be in your design to ensure a quality solution.
Categories: Blogs

Vanilla Java - 4 hours 45 min ago
Microservices is a buzzword at the moment. Is it really something original, or is it based on established best practices? There are some disadvantages to the way microservices have been implemented, but can these be solved?
Categories: Blogs

Creating Custom JDK9 Runtime Images [Video]

For a brief overview of how to take advantage of a new feature in JDK9, namely the ability to create custom runtime images, please view the YouTube video that follows. As an addendum to this video, a recently published blog explains how the creation of runtime images can be automated inside a NetBeans project. Please check out Automating the Creation of JDK9 Reduced Runtime Images for further edification.

Categories: Communities

An Introduction to Functional Programming in Java 8 (Part 3): Streams

In the last part, we learned about the Optional type and how to use it correctly.

Today, we will learn about Streams, which you can use as a functional alternative for working with Collections. Some methods were already seen when we used Optionals, so be sure to check out the part about Optionals.
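As a first taste of what the article covers, here is a minimal, self-contained sketch (my own illustration, not code from the article) of a stream pipeline that filters and transforms a List:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamTeaser {

    // Keeps words longer than three characters and upper-cases them.
    static List<String> shout(List<String> words) {
        return words.stream()
            .filter(w -> w.length() > 3)       // drop short elements
            .map(String::toUpperCase)          // transform the rest
            .collect(Collectors.toList());     // materialize as a List
    }

    public static void main(String[] args) {
        System.out.println(shout(Arrays.asList("foo", "java", "streams")));
        // prints [JAVA, STREAMS]
    }
}
```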

Categories: Communities

Onion Architecture Is Interesting

After the Layered and Hexagonal architectures, the time has come to talk about their close cousin – the Onion Architecture, initially introduced in a series of posts by Jeffrey Palermo.

What is Onion Architecture?

As we said in the introduction, the concept of Onion Architecture is closely connected to two other architectural styles – Layered and Hexagonal. Similarly to the Layered approach, Onion Architecture uses the concept of layers, but they are a little different:

Categories: Communities

Start IntelliJ IDEA from the command line

No Relation To -Emmanuel Bernard - Mon, 02/27/2017 - 01:00

You can start IntelliJ IDEA from the command line which is handy when you live in a terminal like me. But you need to enable that feature.

Open IntelliJ IDEA, go to Tools->Create Command-Line Launcher... and optionally adjust the location and name of the script that will start IntelliJ IDEA. Voilà! Now from your command line, you can type:

  • idea . to open the project in the current directory
  • idea pom.xml to import the Maven project
  • idea diff <left> <right> to launch the diff tool.

The generated script has an annoying flaw though: it references your preference and cache directories in a hard-coded fashion. And for some reason the IntelliJ folks embed the version number in these directories (e.g. IdeaIC2016.2). That's annoying, as it will likely break the minute you move to another (major?) version.

Antonio has a solution for that: a simpler and more forgiving script, in good anti-fragile fashion. The script is not generic and only runs on macOS.


#!/bin/sh

# check for where the latest version of IDEA is installed
IDEA=`ls -1d /Applications/IntelliJ\ * | tail -n1`
wd=`pwd`

# were we given a directory?
if [ -d "$1" ]; then
#  echo "checking for things in the working dir given"
  wd=`ls -1d "$1" | head -n1`
fi

# were we given a file?
if [ -f "$1" ]; then
#  echo "opening '$1'"
  open -a "$IDEA" "$1"
else
    # let's check for stuff in our working directory.
    pushd "$wd" > /dev/null

    # does our working dir have an .idea directory?
    if [ -d ".idea" ]; then
#      echo "opening via the .idea dir"
      open -a "$IDEA" .

    # is there an IDEA project file?
    elif [ -f *.ipr ]; then
#      echo "opening via the project file"
      open -a "$IDEA" `ls -1d *.ipr | head -n1`

    # Is there a pom.xml?
    elif [ -f pom.xml ]; then
#      echo "importing from pom"
      open -a "$IDEA" "pom.xml"

    # can't do anything smart; just open IDEA
    else
#      echo 'cbf'
      open "$IDEA"
    fi

    popd > /dev/null
fi

The GitHub gist version of this script. It does not offer the call to IDEA's diff though. I'm from an era where we did resolve > based diff conflicts in Notepad so that does not bother me much.

I think I'll go for Antonio's solution; that will avoid some nasty WTF moments when the preference directory moves and I have forgotten all of this.

Categories: Blogs

Guide to java.util.concurrent.Locks

baeldung - Coding and Testing Stuff - Sun, 02/26/2017 - 19:42

1. Overview

Simply put, a lock is a more flexible and sophisticated thread synchronization mechanism than the standard synchronized block.

The Lock interface has been around since Java 1.5. It is defined inside the java.util.concurrent.lock package and it provides extensive operations for locking.

In this article, we'll explore different implementations of the Lock interface and their applications.

2. Differences between Lock and Synchronized block

There are a few differences between using a synchronized block and the Lock API:

  • A synchronized block is fully contained within a method – with the Lock API, the lock() and unlock() operations can be in separate methods
  • A synchronized block does not support fairness; any thread can acquire the lock once it is released, and no preference can be specified. With the Lock API we can achieve fairness by specifying the fairness property, which makes sure that the longest-waiting thread is given access to the lock
  • A thread gets blocked if it can't get access to the synchronized block. The Lock API provides the tryLock() method, with which a thread acquires the lock only if it's available and not held by any other thread. This reduces the time the thread spends blocked waiting for the lock
  • A thread which is in the "waiting" state to acquire access to a synchronized block can't be interrupted. The Lock API provides a method lockInterruptibly() which can be used to interrupt the thread while it is waiting for the lock
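To illustrate the last point, here is a small self-contained sketch (my own illustration, not from the article) of a thread blocked in lockInterruptibly() being cancelled by an interrupt – something a thread waiting on a synchronized block cannot do:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLockDemo {

    // Returns true if the waiting thread was cancelled via interrupt
    // while blocked on lockInterruptibly().
    static boolean interruptWaiter() {
        ReentrantLock lock = new ReentrantLock();
        AtomicBoolean cancelled = new AtomicBoolean(false);

        lock.lock(); // the main thread holds the lock, so the waiter must block
        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly(); // parks until the lock is free, or throws on interrupt
                lock.unlock();
            } catch (InterruptedException e) {
                cancelled.set(true); // the wait was cancelled; synchronized offers no such escape
            }
        });
        waiter.start();
        try {
            Thread.sleep(100);   // give the waiter time to park on the lock
            waiter.interrupt();  // cancel the wait
            waiter.join();
        } catch (InterruptedException ignored) {
        }
        lock.unlock();
        return cancelled.get();
    }

    public static void main(String[] args) {
        System.out.println("waiter cancelled: " + interruptWaiter());
    }
}
```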
3. Lock API

Let's take a look at the methods in the Lock interface:

  • void lock() – acquires the lock if it's available; if the lock is not available, the thread blocks until the lock is released
  • void lockInterruptibly() – similar to lock(), but it allows the blocked thread to be interrupted and resume execution through a thrown java.lang.InterruptedException
  • boolean tryLock() – the non-blocking version of lock(); it attempts to acquire the lock immediately and returns true if locking succeeds
  • boolean tryLock(long timeout, TimeUnit timeUnit) – similar to tryLock(), except it waits up to the given timeout before giving up trying to acquire the Lock
  • void unlock() – unlocks the Lock instance

A locked instance should always be unlocked to avoid a deadlock condition. The recommended pattern wraps the critical section in a try/finally block:

Lock lock = ...;
lock.lock();
try {
    // access to the shared resource
} finally {
    lock.unlock();
}

In addition to the Lock interface, we have a ReadWriteLock interface which maintains a pair of locks: one for read-only operations and one for write operations. The read lock may be held simultaneously by multiple threads as long as there is no write.

ReadWriteLock declares methods to acquire read or write locks:

  • Lock readLock() – returns the lock that's used for reading
  • Lock writeLock() – returns the lock that's used for writing
4. Lock Implementations

4.1. ReentrantLock

The ReentrantLock class implements the Lock interface. It offers the same concurrency and memory semantics as the implicit monitor lock accessed using synchronized methods and statements, with extended capabilities.

Let's see how we can use ReentrantLock for synchronization:

public class SharedObject {
    ReentrantLock lock = new ReentrantLock();
    int counter = 0;

    public void perform() {
        lock.lock();
        try {
            // Critical section here
            counter++;
        } finally {
            lock.unlock();
        }
    }
}

We need to make sure that we wrap the lock() and unlock() calls in a try/finally block to avoid deadlock situations.

Let's see how tryLock() works:

public void performTryLock() throws InterruptedException {
    boolean isLockAcquired = lock.tryLock(1, TimeUnit.SECONDS);
    if (isLockAcquired) {
        try {
            // Critical section here
        } finally {
            lock.unlock();
        }
    }
}

In this case, the thread calling tryLock() will wait for one second and will give up waiting if the lock is not available.

4.2. ReentrantReadWriteLock

The ReentrantReadWriteLock class implements the ReadWriteLock interface.

Let's see the rules for acquiring the ReadLock or WriteLock by a thread:

  • Read Lock – if no thread has acquired or requested the write lock, multiple threads can acquire the read lock
  • Write Lock – if no threads are reading or writing, only one thread can acquire the write lock

Let's see how to make use of the ReadWriteLock:

public class SynchronizedHashMapWithReadWriteLock {

    Map<String, String> syncHashMap = new HashMap<>();
    ReadWriteLock lock = new ReentrantReadWriteLock();
    Lock writeLock = lock.writeLock();

    public void put(String key, String value) {
        writeLock.lock();
        try {
            syncHashMap.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }

    public String remove(String key) {
        writeLock.lock();
        try {
            return syncHashMap.remove(key);
        } finally {
            writeLock.unlock();
        }
    }
}

For both write methods, we need to surround the critical section with the write lock; only one thread at a time can get access to it:

Lock readLock = lock.readLock();

public String get(String key) {
    readLock.lock();
    try {
        return syncHashMap.get(key);
    } finally {
        readLock.unlock();
    }
}

public boolean containsKey(String key) {
    readLock.lock();
    try {
        return syncHashMap.containsKey(key);
    } finally {
        readLock.unlock();
    }
}

For both read methods, we need to surround the critical section with the read lock. Multiple threads can get access to this section if no write operation is in progress.

4.3. StampedLock

StampedLock was introduced in Java 8. It also supports both read and write locks. However, its lock acquisition methods return a stamp that is used to release the lock or to check if the lock is still valid:

public class StampedLockDemo {
    Map<String, String> map = new HashMap<>();
    private StampedLock lock = new StampedLock();

    public void put(String key, String value) {
        long stamp = lock.writeLock();
        try {
            map.put(key, value);
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public String get(String key) throws InterruptedException {
        long stamp = lock.readLock();
        try {
            return map.get(key);
        } finally {
            lock.unlockRead(stamp);
        }
    }
}

Another feature provided by StampedLock is optimistic locking. Most of the time, read operations don't need to wait for a write operation to complete, and as a result a full-fledged read lock is not required. Instead, we can try an optimistic read and upgrade to a read lock only when needed:

public String readWithOptimisticLock(String key) {
    long stamp = lock.tryOptimisticRead();
    String value = map.get(key);

    if (!lock.validate(stamp)) {
        stamp = lock.readLock();
        try {
            return map.get(key);
        } finally {
            lock.unlock(stamp);
        }
    }
    return value;
}

5. Working with Conditions

The Condition class provides the ability for a thread to wait for some condition to occur while executing the critical section.

This can occur when a thread acquires the access to the critical section but doesn’t have the necessary condition to perform its operation. For example, a reader thread can get access to the lock of a shared queue, which still doesn’t have any data to consume.

Traditionally Java provides wait(), notify() and notifyAll() methods for thread intercommunication. Conditions have similar mechanisms, but in addition, we can specify multiple conditions:

public class ReentrantLockWithCondition {

    Stack<String> stack = new Stack<>();
    int CAPACITY = 5;

    ReentrantLock lock = new ReentrantLock();
    Condition stackEmptyCondition = lock.newCondition();
    Condition stackFullCondition = lock.newCondition();

    public void pushToStack(String item) throws InterruptedException {
        lock.lock();
        try {
            while (stack.size() == CAPACITY) {
                stackFullCondition.await();
            }
            stack.push(item);
            stackEmptyCondition.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public String popFromStack() throws InterruptedException {
        lock.lock();
        try {
            while (stack.size() == 0) {
                stackEmptyCondition.await();
            }
            return stack.pop();
        } finally {
            stackFullCondition.signalAll();
            lock.unlock();
        }
    }
}

6. Conclusion

In this article, we have seen different implementations of the Lock interface and the newly introduced StampedLock class. We also explored how we can make use of the Condition class to work with multiple conditions.

The complete code for this tutorial is available over on GitHub.

Categories: Blogs

AWS Lambda Using DynamoDB With Java

baeldung - Coding and Testing Stuff - Sun, 02/26/2017 - 16:19
1. Introduction

AWS Lambda is a serverless computing service provided by Amazon Web Services, and AWS DynamoDB is a NoSQL database service also provided by Amazon.

Interestingly, DynamoDB supports both document store and key-value store and is fully managed by AWS.

Before we start, note that this tutorial requires a valid AWS account (you can create one here). Also, it's a good idea to first read the AWS Lambda with Java article.

2. Maven Dependencies

To enable Lambda, we need the following dependency, which can be found on Maven Central:
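A sketch of the dependency declaration, assuming the usual aws-lambda-java-core artifact (the version shown is an assumption; check Maven Central for the latest):

```xml
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-core</artifactId>
    <version>1.1.0</version>
</dependency>
```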


To use different AWS resources, we need the following dependency, which can also be found on Maven Central:
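A sketch of the declaration, assuming the umbrella aws-java-sdk artifact (artifact and version are assumptions):

```xml
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.11.86</version>
</dependency>
```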


And to build the application, we’re going to use the Maven Shade Plugin:
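A sketch of a typical Shade plugin configuration (version and settings are assumptions):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```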

3. Lambda Code

There are different ways of creating handlers in a lambda application:

  • MethodHandler
  • RequestHandler
  • RequestStreamHandler

We will use the RequestHandler interface in our application. We'll accept the PersonRequest in JSON format, and the response will be PersonResponse, also in JSON format:

public class PersonRequest {
    private String firstName;
    private String lastName;
    // standard getters and setters
}

public class PersonResponse {
    private String message;
    // standard getters and setters
}

Next is our entry point class, which implements the RequestHandler interface:

public class SavePersonHandler 
  implements RequestHandler<PersonRequest, PersonResponse> {

    private DynamoDB dynamoDb;
    private String DYNAMODB_TABLE_NAME = "Person";
    private Regions REGION = Regions.US_WEST_2;

    public PersonResponse handleRequest(
      PersonRequest personRequest, Context context) {
        this.initDynamoDbClient();

        persistData(personRequest);

        PersonResponse personResponse = new PersonResponse();
        personResponse.setMessage("Saved Successfully!!!");
        return personResponse;
    }

    private PutItemOutcome persistData(PersonRequest personRequest) 
      throws ConditionalCheckFailedException {
        return this.dynamoDb.getTable(DYNAMODB_TABLE_NAME)
          .putItem(new PutItemSpec().withItem(new Item()
            .withString("firstName", personRequest.getFirstName())
            .withString("lastName", personRequest.getLastName())));
    }

    private void initDynamoDbClient() {
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();
        this.dynamoDb = new DynamoDB(client);
    }
}

When we implement the RequestHandler interface, we need to implement handleRequest() for the actual processing of the request. As for the rest of the code, we have:

  • PersonRequest object – contains the request values passed in JSON format
  • Context object – used to get information from the lambda execution environment
  • PersonResponse – the response object for the lambda request

When creating a DynamoDB object, we’ll first create the AmazonDynamoDBClient object and use that to create a DynamoDB object. Note that the region is mandatory.

To add items to the DynamoDB table, we'll make use of a PutItemSpec object – specifying the columns and their values.

We don't need any predefined schema for a DynamoDB table; we just need to define the primary key column name, which is "id" in our case.

4. Building the Deployment File

To build the lambda application, we need to execute the following Maven command:

mvn clean package shade:shade

The Lambda application will be compiled and packaged into a jar file under the target folder.

5. Creating the DynamoDB Table

Follow these steps to create the DynamoDB table:

  • Login to AWS Account
  • Click "DynamoDB", which can be located under "All Services"
  • This page will show already created DynamoDB tables (if any)
  • Click “Create Table” button
  • Provide “Table name” and “Primary Key” with its datatype as “Number”
  • Click on “Create” button
  • Table will be created
6. Creating the Lambda Function

Follow these steps to create the Lambda function:

  • Login to AWS Account
  • Click "Lambda", which can be located under "All Services"
  • This page will show already created Lambda functions (if any); if none have been created yet, click "Get Started Now"
  • "Select blueprint" -> Select "Blank Function"
  • "Configure triggers" -> Click the "Next" button
  • “Configure function”
    • “Name”: SavePerson
    • “Description”: Save Person to DDB
    • “Runtime”: Select “Java 8”
    • “Upload”: Click “Upload” button and select the jar file of lambda application
  • “Handler”: com.baeldung.lambda.dynamodb.SavePersonHandler
  • “Role”: Select “Create a custom role”
  • A new window will pop up allowing us to configure the IAM role for lambda execution, and we need to add the DynamoDB grants to it. Once done, click the "Allow" button
  • Click “Next” button
  • “Review”: Review the configuration
  • Click “Create function” button
7. Testing the Lambda Function

The next step is to test the lambda function:

  • Click the “Test” button
  • The "Input test event" window will be shown. Here, we'll provide the JSON input for our request:
    {
      "id": 1,
      "firstName": "John",
      "lastName": "Doe",
      "age": 30,
      "address": "United States"
    }
  • Click the "Save and test" or "Save" button
  • The output can be seen in the "Execution result" section:
    {
      "message": "Saved Successfully!!!"
    }
  • We also need to check in DynamoDB that the record is persisted:
    • Go to “DynamoDB” Management Console
    • Select the table “Person”
    • Select the “Items” tab
    • Here you can see the person's details which were passed in the request to the lambda application
  • So the request is successfully processed by our lambda application
8. Conclusion

In this quick article, we have learned how to create a Lambda application with DynamoDB and Java 8. The detailed instructions should give you a head start in setting everything up.

And, as always, the full source code for the example app can be found over on Github.

Categories: Blogs

Software at Delta Air Lines

A few days ago, not long after flying with Delta to Atlanta for DevNexus, and benefiting from a Delta upgrade and bathing in the luxury of Delta business class, I spent some time at Delta's Operational Control Center (OCC).

I was able to do that thanks to Graeme Ingleby, a senior developer at Delta who has been exploring the benefits of the NetBeans Platform for quite some time and has attended JavaOne over the past years, including related events such as NetBeans Day and other NetBeans social events at JavaOne.

Delta, of course, is one of the key international organizations based in Atlanta, as are Coca-Cola and CNN. The terrain of the OCC is large and diverse and includes a museum being built within a Delta plane, shown below:

I was given an inspiring tour throughout the OCC, by Ben Shermer, General Manager, Flight Control. The OCC handles absolutely everything you can think of in relation to Delta operations. Everything, absolutely everything, in relation to aircraft, crew, and passengers is managed from the OCC, a very small part of which is shown below:

For example, the OCC handles aircraft maintenance, hotel bookings for aircraft crew, emergencies such as death or illness of passengers on planes, boarding procedures, and more. Much more. Everything, in fact, all over the world, connected to anything to do with Delta is handled in the OCC in Atlanta. 

Each computer in the OCC has a light on top of it which, when switched on, indicates that the operator is on the phone. I was told during the tour that the head of the OCC is happiest when all the lights are switched off and when all the operators have their feet up on their desks while reading their newspapers, since that means that there isn't an emergency of some kind being handled.

For me, as a developer, the most interesting part of the day was seeing the application below: 

What you see above is, yes, a Java Swing application. The dominant elements are a JTable and, along the bottom, some JFreeCharts. All the data of all planes, crews, and passengers are received and monitored in this application. Someone sick on a plane? Flight delays? Snow storms? Crew hotel bookings? Current percentage of boarded passengers? Everything is displayed in one of the columns of this highly customized JTable.

The application above is named "Bridge Desktop". It is one of dozens of applications in use at the OCC. And that's precisely the problem of the OCC. The software across the OCC handles multiple different use cases and the applications have multiple different histories, coming from a variety of different organizations historically over time. Some duplicate the functionality of other software. Cut/paste and drag/drop between these applications is difficult to impossible, while multiple monitor support is an essential requirement, since as you can imagine, each operator is looking at about six different screens all at the same time.

How to integrate these different applications is the big problem of Delta. Some of the applications are Java, some C/C++, some web-based, etc. Each has different requirements and demands. Bridge Desktop, for example, has as its central component a highly customized JTable, which has taken years to develop to the point where it is now, both in terms of content and functionality. For example, multi-select across rows in tables has been built in, with a rules engine underneath it all, and features for comparing disparate data sets. There's no point in moving this application to JavaFX, since that JTable would need to be rewritten and the benefit of JavaFX in this context is severely limited, especially when weighed against the cost of the rewrite.

And a web-based solution would also not bring anything of benefit versus the cost of moving the application into the browser. One could imagine an interactive dashboard of some kind to replace the JTable. In principle that sounds like a cool thing, while in reality it isn't a requirement for this piece of software. The operators using the JTable-based solution know how it works and understand it. The slick look and feel that a web-based dashboard would provide sounds completely valid in principle, as does purchasing an off-the-shelf solution. However, off-the-shelf solutions don't work in these highly customized contexts and, though attempts are always being made along those lines, they inevitably fail. Of course, there's continual pressure for a web-based solution, not from users or developers, but from managers. Not a new story at all, though interesting to see replicated again at Delta.

After discussing all these kinds of interesting challenges, I was given a tour of the flight simulators, see below: 

It was a brilliant time and I learned a lot and came out of it affirmed in several opinions I've had for many years. Of course, the NetBeans Platform is being evaluated as a mechanism for integrating the variety of software solutions throughout Delta. It's simply the right tool for the job in this context. 

Thanks again Graeme Ingleby as well as Ben Shermer for the inspiring and enthusiastic tour around the OCC.

Categories: Open Source

Introduction to RabbitMQ

baeldung - Coding and Testing Stuff - Sun, 02/26/2017 - 09:31
1. Overview

Decoupling software components is one of the most important parts of software design. One way of achieving this is by using messaging systems, which provide an asynchronous way of communication between components (services). In this article, we will cover one such system: RabbitMQ.

RabbitMQ is a message broker that implements Advanced Message Queuing Protocol (AMQP). It provides client libraries for major programming languages.

Besides decoupling software components, RabbitMQ can be used for:

  • Performing background operations
  • Performing asynchronous operations
2. Messaging Model

First, let’s have a quick, high-level look at how messaging works.

Simply put, there are two kinds of applications interacting with a messaging system: producers and consumers. Producers are those who send (publish) messages to a broker, and consumers are those who receive messages from the broker. Usually, these programs (software components) run on different machines, and RabbitMQ acts as the communication middleware between them.

In this article, we will discuss a simple example with two services which communicate using RabbitMQ. One of the services will publish messages to RabbitMQ and the other one will consume them.

3. Setup

To begin, let's run RabbitMQ using the official setup guide here.

We'll naturally use the Java client for interacting with the RabbitMQ server; the Maven dependency for this client is:
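A sketch of that dependency, assuming the standard amqp-client artifact (the version is an assumption):

```xml
<dependency>
    <groupId>com.rabbitmq</groupId>
    <artifactId>amqp-client</artifactId>
    <version>4.0.0</version>
</dependency>
```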


After running the RabbitMQ broker using the official guide, we need to connect to it using the Java client:

ConnectionFactory factory = new ConnectionFactory();
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

We use the ConnectionFactory to set up the connection with the server; it takes care of the protocol (AMQP) and authentication as well. Here we connect to the server on localhost; we can modify the host name by using the setHost function.

We can use setPort to set the port if the default port is not used by the RabbitMQ Server; the default AMQP port for RabbitMQ is 5672:
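A sketch of that call (5672 is the standard AMQP listener port; 15672 serves the HTTP management UI):

```java
factory.setPort(5672);
```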


We can set the username and the password:
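A sketch, with placeholder credentials:

```java
factory.setUsername("myUser");     // placeholder username
factory.setPassword("myPassword"); // placeholder password
```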


Further, we will use this connection for publishing and consuming messages.

4. Producer

Consider a simple scenario where a web application allows users to add new products to a website. Any time a new product is added, we need to send an email to customers.

First, let’s define a queue:

channel.queueDeclare("products_queue", false, false, false, null);

Each time a user adds a new product, we will publish a message to the queue:

String message = "product details"; 
channel.basicPublish("", "products_queue", null, message.getBytes());

Lastly, we close the channel and the connection:
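A sketch of the closing calls:

```java
channel.close();
connection.close();
```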


This message will be consumed by another service, which is responsible for sending emails to customers.

5. Consumer

Let's see how we can implement the consumer side; we're going to declare the same queue:

channel.queueDeclare("products_queue", false, false, false, null);

Here's how we define the consumer that will process messages from the queue asynchronously:

Consumer consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(
      String consumerTag,
      Envelope envelope,
      AMQP.BasicProperties properties,
      byte[] body) throws IOException {
        String message = new String(body, "UTF-8");
        // process the message
    }
};
channel.basicConsume("products_queue", true, consumer);

6. Conclusion

This article covered the basic concepts of RabbitMQ and discussed a simple example using it.

The full implementation of this tutorial can be found in the GitHub project.

Categories: Blogs

Spring Tips: Apache MyBatis [Video]

Speaker: Josh Long

Hi Spring fans! In this tip, we’ll look at mapping objects to and from SQL using Apache MyBatis and Spring Boot.

Categories: Communities

Introduction to Cobertura

baeldung - Coding and Testing Stuff - Sat, 02/25/2017 - 17:17
1. Overview

In this article, we will demonstrate several aspects of generating code coverage reports using Cobertura.

Simply put, Cobertura is a reporting tool that calculates test coverage for a codebase – the percentage of branches/lines accessed by unit tests in a Java project.

2. Maven Plugin

2.1. Maven Configuration

In order to start calculating code coverage in your Java project, you need to declare the Cobertura Maven plugin in your pom.xml file under the reporting section:
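A sketch of the plugin declaration (the version is an assumption; check Maven Central for the latest):

```xml
<reporting>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>cobertura-maven-plugin</artifactId>
            <version>2.7</version>
        </plugin>
    </plugins>
</reporting>
```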


You can always check the latest version of the plugin in the Maven central repository.

Once done, go ahead and run Maven specifying cobertura:cobertura as a goal.

This will create a detailed HTML style report showing code coverage statistics gathered via code instrumentation:

The line coverage metric shows how many statements are executed in the Unit Tests run, while the branch coverage metric focuses on how many branches are covered by those tests.

For each conditional, you have two branches, so basically, you’ll end up having twice as many branches as conditionals.

The complexity factor reflects the complexity of the code: it goes up when the number of branches in the code increases.

In theory, the more branches you have, the more tests you need to implement in order to increase the branch coverage score.

2.2. Configuring Code Coverage Calculation and Checks

You can ignore/exclude a specific set of classes from code instrumentation using the ignore and the exclude tags:
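A sketch of such a configuration (the class patterns are illustrative assumptions):

```xml
<configuration>
    <instrumentation>
        <ignores>
            <!-- calls into these classes are ignored when computing coverage -->
            <ignore>com.baeldung.logging.*</ignore>
        </ignores>
        <excludes>
            <!-- these classes are not instrumented at all -->
            <exclude>com/baeldung/**/*Test.class</exclude>
        </excludes>
    </instrumentation>
</configuration>
```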


After calculating the code coverage comes the check phase. The check phase ensures that a certain level of code coverage is reached.

Here's a basic example of how to configure the check phase:
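A sketch of a check configuration (the rates and the package pattern are illustrative assumptions):

```xml
<configuration>
    <check>
        <haltOnFailure>true</haltOnFailure>
        <branchRate>75</branchRate>
        <lineRate>85</lineRate>
        <regexes>
            <regex>
                <pattern>com.baeldung.algorithms.dijkstra.*</pattern>
                <branchRate>60</branchRate>
                <lineRate>50</lineRate>
            </regex>
        </regexes>
    </check>
</configuration>
```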


When using the haltOnFailure flag, Cobertura will cause the build to fail if one of the specified checks fails.

The branchRate/lineRate tags specify the minimum acceptable branch/line coverage score required after code instrumentation. These checks can be expanded to the package level using the packageLineRate/packageBranchRate tags.

It is also possible to declare specific rule checks for classes with names following a specific pattern by using the regex tag. In the example above, we ensure that a specific line/branch coverage score must be reached for classes in the com.baeldung.algorithms.dijkstra package and below.

3. Eclipse Plugin

3.1. Installation

Cobertura is also available as an Eclipse plugin called eCobertura. In order to install eCobertura for Eclipse, you need to follow the steps below and have Eclipse version 3.5 or greater installed:

Step 1: From the Eclipse menu, select Help → Install New Software. Then, in the Work with field, enter the eCobertura update site URL.

Step 2: Select eCobertura Code Coverage, click “Next”, and then follow the steps in the installation wizard.

Now that eCobertura is installed, restart Eclipse and show the coverage session view under Window → Show View → Other → Cobertura.

3.2. Using Eclipse Kepler or Later

For newer versions of Eclipse (Kepler, Luna, etc.), installing eCobertura may cause some problems related to JUnit: the newer version of JUnit packaged with Eclipse is not fully compatible with eCobertura’s dependency checker:

Cannot complete the install because one or more required items could not be found.
  Software being installed: eCobertura (
  Missing requirement: eCobertura UI (ecobertura.ui requires 'bundle org.junit4 0.0.0' but it could not be found
  Cannot satisfy dependency:
    From: eCobertura 
    To: ecobertura.ui []

As a workaround, you can download an older version of JUnit and place it into the Eclipse plugins folder.

This can be done by deleting the folder org.junit.*** from %ECLIPSE_HOME%/plugins, and then copying the same folder from an older Eclipse installation that is compatible with eCobertura.

Once done, restart your Eclipse IDE and re-install the plugin using the corresponding update site.

3.3. Code Coverage Reports in Eclipse

In order to calculate code coverage by a Unit Test, right-click your project/test to open the context menu, then choose the option Cover As → JUnit Test.

Under the Coverage Session view, you can check the line/branch coverage report per class:

Java 8 users may encounter a common error when calculating code coverage:

java.lang.VerifyError: Expecting a stackmap frame at branch target ...

In this case, Java is complaining about some methods not having a proper stack map, due to the stricter bytecode verifier introduced in newer versions of Java.

This issue can be solved by disabling verification in the Java Virtual Machine.

To do so, right-click your project to open the context menu, select Cover As, and then open the Coverage Configurations view. In the arguments tab, add the -noverify flag as a VM argument. Finally, click on the coverage button to launch coverage calculation.

You can also use the flag -XX:-UseSplitVerifier, but this only works with Java 6 and 7, as the split verifier is no longer supported in Java 8.

4. Conclusion

In this article, we have shown briefly how to use Cobertura to calculate code coverage in a Java project. We have also described the steps required to install eCobertura in your Eclipse environment.

Cobertura is a great yet simple code coverage tool, but it is no longer actively maintained and is currently outclassed by newer, more powerful tools like JaCoCo.

Finally, you can check out the example provided in this article in the GitHub project.

Categories: Blogs

Introduction to jOOL

baeldung - Coding and Testing Stuff - Sat, 02/25/2017 - 12:17
1. Overview

In this article, we will be looking at the jOOL library, another product from jOOQ.

2. Maven Dependency

Let’s start by adding a Maven dependency to your pom.xml:
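
A sketch of the dependency declaration is shown below; the version is illustrative, so check Maven Central for the latest:

```xml
<dependency>
    <groupId>org.jooq</groupId>
    <artifactId>jool</artifactId>
    <version>0.9.12</version>
</dependency>
```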


You can find the latest version here.

3. Functional Interfaces

In Java 8, functional interfaces are quite limited: they accept at most two parameters and do not offer many additional features.

jOOL fixes that by providing a set of new functional interfaces that can accept up to 16 parameters (from Function1 up to Function16) and are enriched with additional handy methods.

For example, to create a function that takes three arguments, we can use Function3:

Function3<String, String, String, Integer> lengthSum
  = (v1, v2, v3) -> v1.length() + v2.length() + v3.length();

In pure Java, you would need to implement it yourself. Besides that, functional interfaces from jOOL have an applyPartially() method that allows us to perform partial application easily:

Function2<Integer, Integer, Integer> addTwoNumbers = (v1, v2) -> v1 + v2;
Function1<Integer, Integer> addToTwo = addTwoNumbers.applyPartially(2);

Integer result = addToTwo.apply(5);

assertEquals(result, (Integer) 7);

When we have a method that is of a Function2 type, we can transform it easily to a standard Java BiFunction by using a toBiFunction() method:

BiFunction biFunc = addTwoNumbers.toBiFunction();

Similarly, there is a toFunction() method in Function1 type.

4. Tuples

A tuple is a very important construct in the functional programming world. It’s a typed container for values where each value can have a different type. Tuples are often used as function arguments.

They’re also very useful when doing transformations on a stream of events. In jOOL, we have tuples that can wrap from one up to sixteen values, provided by the Tuple1 up to Tuple16 types:

tuple(2, 2)

And for four values:

tuple(1, 2, 3, 4)
Let’s consider an example where we have a sequence of tuples that carries three values:

Seq<Tuple3<String, String, Integer>> personDetails = Seq.of(
  tuple("michael", "similar", 49),
  tuple("jodie", "variable", 43));
Tuple2<String, String> tuple = tuple("winter", "summer");

List<Tuple4<String, String, String, String>> result = personDetails
  .map(t -> t.limit2().concat(tuple)).toList();

assertEquals(
  Arrays.asList(
    tuple("michael", "similar", "winter", "summer"),
    tuple("jodie", "variable", "winter", "summer")),
  result);

We can use different kinds of transformations on tuples. First, we call the limit2() method to take only two values from the Tuple3. Then, we call the concat() method to concatenate two tuples.

In the result, we get values that are of a Tuple4 type.

5. Seq 

The Seq construct adds higher-level methods on top of a Stream, while often using its methods underneath.

5.1. Contains Operations

We can find a couple of variants of methods that check for the presence of elements in a Seq. Some of these methods use the anyMatch() method from the Stream class:

assertTrue(Seq.of(1, 2, 3, 4).contains(2));

assertTrue(Seq.of(1, 2, 3, 4).containsAll(2, 3));

assertTrue(Seq.of(1, 2, 3, 4).containsAny(2, 5));
5.2. Join Operations

When we have two streams and want to join them (similar to a SQL join of two datasets), using the standard Stream class is not a very elegant way to do this:

Stream<Integer> left = Stream.of(1, 2, 4);
Stream<Integer> right = Stream.of(1, 2, 3);

List<Integer> rightCollected = right.collect(Collectors.toList());
List<Integer> collect = left
  .filter(rightCollected::contains)
  .collect(Collectors.toList());

assertEquals(collect, Arrays.asList(1, 2));

We need to collect the right stream into a list to prevent java.lang.IllegalStateException: stream has already been operated upon or closed. Next, we need to perform a side-effect operation by accessing the rightCollected list from a filter method. This is an error-prone and inelegant way to join two data sets.

Fortunately, Seq has useful methods to do inner, left and right joins on data sets. Those methods hide the implementation details and expose an elegant API.

We can do an inner join by using an innerJoin() method:

assertEquals(
  Seq.of(1, 2, 4).innerJoin(Seq.of(1, 2, 3), (a, b) -> a == b).toList(),
  Arrays.asList(tuple(1, 1), tuple(2, 2)));

We can do right and left joins accordingly:

assertEquals(
  Seq.of(1, 2, 4).leftOuterJoin(Seq.of(1, 2, 3), (a, b) -> a == b).toList(),
  Arrays.asList(tuple(1, 1), tuple(2, 2), tuple(4, null)));

assertEquals(
  Seq.of(1, 2, 4).rightOuterJoin(Seq.of(1, 2, 3), (a, b) -> a == b).toList(),
  Arrays.asList(tuple(1, 1), tuple(2, 2), tuple(null, 3)));

There is even a crossJoin() method that makes it possible to do a Cartesian join of two datasets:

assertEquals(
  Seq.of(1, 2).crossJoin(Seq.of("A", "B")).toList(),
  Arrays.asList(tuple(1, "A"), tuple(1, "B"), tuple(2, "A"), tuple(2, "B")));
5.3. Manipulating a Seq

Seq¬†has many useful methods for manipulating sequences of elements. Let’s look at some of them.

We can use the cycle() method to repeatedly take elements from a source sequence. It creates an infinite stream, so we need to be careful when collecting results into a list; we use the limit() method to transform the infinite sequence into a finite one:

assertEquals(
  Seq.of(1, 2, 3).cycle().limit(9).toList(),
  Arrays.asList(1, 2, 3, 1, 2, 3, 1, 2, 3));

Let’s say that we want to duplicate all elements from one sequence into a second sequence. The duplicate() method does exactly that:

assertEquals(
  Seq.of(1, 2, 3).duplicate().map((first, second) -> tuple(first.toList(), second.toList())),
  tuple(Arrays.asList(1, 2, 3), Arrays.asList(1, 2, 3)));

The return type of the duplicate() method is a tuple of two sequences.

Let’s say that we have a sequence of integers and we want to split that sequence into two sequences using some predicate. We can use a partition() method:

assertEquals(
  Seq.of(1, 2, 3, 4).partition(i -> i > 2)
    .map((first, second) -> tuple(first.toList(), second.toList())),
  tuple(Arrays.asList(3, 4), Arrays.asList(1, 2)));
5.4. Grouping Elements

Grouping elements by a key using the Stream API is cumbersome and non-intuitive, because we need to use the collect() method with a Collectors.groupingBy collector.
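
For comparison, here is a minimal, self-contained sketch of that plain Stream API approach (the class name is my own illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class GroupByDemo {
    public static void main(String[] args) {
        // Plain Stream API: grouping requires an explicit collect()
        // with the Collectors.groupingBy collector
        Map<Integer, List<Integer>> grouped = Stream.of(1, 2, 3, 4)
          .collect(Collectors.groupingBy(i -> i % 2));

        System.out.println(grouped);
    }
}
```

Running this prints a map with the odd values under key 1 and the even values under key 0.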

Seq hides that code behind a groupBy() method that returns a Map directly, so there is no need to use the collect() method explicitly:

Map<Integer, List<Integer>> expectedAfterGroupBy = new HashMap<>();
expectedAfterGroupBy.put(1, Arrays.asList(1, 3));
expectedAfterGroupBy.put(0, Arrays.asList(2, 4));

assertEquals(
  expectedAfterGroupBy,
  Seq.of(1, 2, 3, 4).groupBy(i -> i % 2));
5.5. Skipping Elements

Let’s say that we have a sequence of elements and we want to skip them while a predicate is satisfied. Once the predicate is no longer matched, the remaining elements land in the resulting sequence.

We can use a skipWhile() method for that:

assertEquals(
  Seq.of(1, 2, 3, 4, 5).skipWhile(i -> i < 3).toList(),
  Arrays.asList(3, 4, 5));

We can achieve the same result using a skipUntil() method:

assertEquals(
  Seq.of(1, 2, 3, 4, 5).skipUntil(i -> i == 3).toList(),
  Arrays.asList(3, 4, 5));
5.6. Zipping Sequences

When we’re processing sequences of elements, often there is a need to zip them into one sequence.

The zip() API can be used to zip two sequences into one:

assertEquals(
  Seq.of(1, 2, 3).zip(Seq.of("a", "b", "c")).toList(),
  Arrays.asList(tuple(1, "a"), tuple(2, "b"), tuple(3, "c")));

The resulting sequence contains tuples of two elements.

When we are zipping two sequences but want to zip them in a specific way, we can pass a BiFunction to the zip() method to define how the elements are zipped:

assertEquals(
  Seq.of(1, 2, 3).zip(Seq.of("a", "b", "c"), (x, y) -> x + ":" + y).toList(),
  Arrays.asList("1:a", "2:b", "3:c"));

Sometimes, it is useful to zip a sequence with the indexes of its elements, via the zipWithIndex() API:

assertEquals(
  Seq.of("a", "b", "c").zipWithIndex().toList(),
  Arrays.asList(tuple("a", 0L), tuple("b", 1L), tuple("c", 2L)));
6. Converting Checked Exceptions to Unchecked

Let’s say that we have a method that takes a string and can throw a checked exception:

public Integer methodThatThrowsChecked(String arg) throws Exception {
    return arg.length();
}

Then we want to map elements of a Stream by applying that method to each element. There is no way to handle that exception higher up, so we need to handle it inside the map() method:

List<Integer> collect = Stream.of("a", "b", "c").map(elem -> {
    try {
        return methodThatThrowsChecked(elem);
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}).collect(Collectors.toList());

assertEquals(collect, Arrays.asList(1, 1, 1));

There is not much we can do with that exception because of the design of functional interfaces in Java, so in the catch clause we convert the checked exception into an unchecked one.

Fortunately, jOOL provides an Unchecked class with methods that convert checked exceptions into unchecked exceptions:

List<Integer> collect = Stream.of("a", "b", "c")
  .map(Unchecked.function(elem -> methodThatThrowsChecked(elem)))
  .collect(Collectors.toList());

assertEquals(collect, Arrays.asList(1, 1, 1));

We are wrapping the call to methodThatThrowsChecked() in the Unchecked.function() method, which handles the conversion of exceptions underneath.

7. Conclusion

This article shows how to use the jOOL library that adds useful additional methods to the Java standard Stream API.

The implementation of all these examples and code snippets can be found in the GitHub project; this is a Maven project, so it should be easy to import and run as it is.

Categories: Blogs

Finding Max/Min of a List or Collection

baeldung - Coding and Testing Stuff - Sat, 02/25/2017 - 10:52
1. Introduction

A quick intro on how to find the min/max value in a given list/collection with the powerful Stream API in Java 8.

2. Find Max in a List of Integers

We can use the max() method provided by the java.util.stream.Stream interface. It accepts a method reference:

public void whenListIsOfIntegerThenMaxCanBeDoneUsingIntegerComparator() {
    // given
    List<Integer> listOfIntegers = Arrays.asList(1, 2, 3, 4, 56, 7, 89, 10);
    Integer expectedResult = 89;

    // then
    Integer max = listOfIntegers
      .stream()
      .mapToInt(v -> v)
      .max()
      .orElseThrow(NoSuchElementException::new);

    assertEquals("Should be 89", expectedResult, max);
}

Let’s take a closer look at the code:

  1. Calling stream() method on the list to get a stream of values from the list
  2. Calling mapToInt(value -> value) on the stream to get an IntStream
  3. Calling max() method on the stream to get the max value
  4. Calling orElseThrow() to throw an exception if no value is received from max()
3. Find Min with Custom Objects

In order to find the min/max of custom objects, we can also provide a lambda expression for our preferred sorting logic.

Let’s first define the custom POJO:

class Person {
    String name;
    Integer age;
    // standard constructors, getters and setters
}

We want to find the Person object with the minimum age:

public void whenListIsOfPersonObjectThenMinCanBeDoneUsingCustomComparatorThroughLambda() {
    // given
    Person alex = new Person("Alex", 23);
    Person john = new Person("John", 40);
    Person peter = new Person("Peter", 32);
    List<Person> people = Arrays.asList(alex, john, peter);

    // then
    Person minByAge = people
      .stream()
      .min((p1, p2) -> p1.getAge() - p2.getAge())
      .orElseThrow(NoSuchElementException::new);

    assertEquals("Should be Alex", alex, minByAge);
}

Let’s have a look at this logic:

  1. Calling stream() method on the list to get a stream of values from the list
  2. Calling the min() method on the stream to get the minimum value. We pass a lambda function as a comparator; it supplies the sorting logic used to decide the minimum value
  3. Calling orElseThrow() to throw an exception if no value is received from min()
4. Conclusion

In this quick article, we explored how the max() and min() methods from Java 8’s Stream API can be used to find the maximum and minimum value from a List/Collection.

As always, the code is available over on GitHub.

Categories: Blogs

Building a Spring Boot RestController to Search Redis

I’ve just started taking a look at using Redis. I wondered what it would look like to build a simple REST interface with Spring Boot. Spring Data Redis makes this pretty simple.

First up, you need to configure a @Bean in your @SpringBootApplication class (full source is on GitHub here):
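
The full configuration is in the linked source; as a rough sketch of what such a bean setup might look like (the class and method names below are my own illustration, assuming Spring Data Redis with the Jedis driver on the classpath):

```java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;

@SpringBootApplication
public class RedisApplication {

    // Connection factory pointing at a local Redis instance
    // (defaults to localhost:6379)
    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        return new JedisConnectionFactory();
    }

    // Template that controllers and repositories use to talk to Redis
    @Bean
    StringRedisTemplate stringRedisTemplate(JedisConnectionFactory factory) {
        return new StringRedisTemplate(factory);
    }
}
```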

Categories: Communities