Setting up a sharded MongoDB cluster on localhost

I have been playing around with MongoDB, thanks to the M101J course offered by MongoDB University. NoSQL datastores are gaining popularity for a number of reasons, one of them being the ease with which they can be scaled out, i.e. horizontal scaling. In MongoDB, horizontal scaling is achieved by creating a sharded cluster of MongoDB instances.

You might want to understand the concept of sharding before continuing; the MongoDB reference manual has a very clear explanation of it here.

Getting rid of Getters and Setters in your POJO

We have all read in Java books about encapsulating the fields of a class, and whenever you write code you are asked to take special care to encapsulate the fields and provide explicit getters and setters. These are very strict instructions. Let's step back a bit and find the reason behind encapsulating fields: it is all done to have control over how the fields are accessed and modified. One might want to allow the user of the class to read data from only a few fields, or to control how the data in certain fields is updated, and so on. On other occasions, frameworks need these getters and setters to populate your POJOs (Plain Old Java Objects).

The pain involved in adding these getters and setters has been reduced by IDEs, which can generate them for you. But the generated code makes your class definition very verbose and hides the actual business logic, if any, inside the class definition. There are a lot of ways to get away without defining the getters and setters explicitly, and I have even blogged about using Project Lombok's annotations to declare them. I have come across another approach that avoids defining getters and setters, and it doesn't auto-generate code or use annotations. I am sure I have read about this approach somewhere but am unable to recall where, so it is something that has been used before, and I am trying to create awareness about it among my readers via this blog post.

Let me first define the class with getters and setters and then show how to get rid of them:

class TaskWithGettersSetters {
  public TaskWithGettersSetters(String title, String notes,
      LocalDateTime deadline, String assignedTo) {
    this.title = title;
    this.notes = notes;
    this.addedOn = LocalDateTime.now();
    this.deadline = deadline;
    this.assignedTo = assignedTo;
  }

  public TaskWithGettersSetters() {
  }

  private String        title;
  private String        notes;
  private LocalDateTime addedOn;
  private LocalDateTime deadline;
  private String        assignedTo;

  public String getTitle() {
    return title;
  }

  public void setTitle(String title) {
    this.title = title;
  }

  public String getNotes() {
    return notes;
  }

  public void setNotes(String notes) {
    this.notes = notes;
  }

  public LocalDateTime getAddedOn() {
    return addedOn;
  }

  public void setAddedOn(LocalDateTime addedOn) {
    this.addedOn = addedOn;
  }

  public LocalDateTime getDeadline() {
    return deadline;
  }

  public void setDeadline(LocalDateTime deadline) {
    this.deadline = deadline;
  }

  public String getAssignedTo() {
    return assignedTo;
  }

  public void setAssignedTo(String assignedTo) {
    this.assignedTo = assignedTo;
  }

}

There is nothing to explain in the above code; it is pretty clear, with private fields and public getters and setters. The class definition is about 60 lines. Let's see how we can define the class without providing getters and setters:

class Task {

  public Task(String title, String notes, LocalDateTime deadline,
      String assignedTo) {
    this.title = title;
    this.notes = notes;
    this.addedOn = LocalDateTime.now();
    this.deadline = deadline;
    this.assignedTo = assignedTo;
  }

  public final String        title;
  public final String        notes;
  public final LocalDateTime addedOn;
  public final LocalDateTime deadline;
  public final String        assignedTo;

}

The above is what I call a class definition on a diet :) It is less verbose and is just 18 lines. You might be alarmed by the public modifiers on the fields and confused by the final modifiers. Let me explain the ideology behind this approach:

  1. As the fields are final, they cannot be modified after they are initialized, so we need not worry about the data in a field being changed. We have to provide a constructor that initializes these fields, otherwise the compiler will shout at you for not understanding what the final modifier means.
  2. The data in the fields can be accessed by using the fields directly, not via getter methods.
  3. This approach enforces immutability of objects, i.e. if we have to update a field we have to create a new object with the updated value of that field (see the sketch after this list).
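
For instance, updating the deadline can be expressed as a method that returns a new Task instead of mutating the existing one. A minimal sketch (the withDeadline name is my own and not part of the class above):

//A hypothetical "wither" that could be added to the Task class above.
//Note that the constructor sets addedOn to now(), so the copy gets a fresh timestamp.
public Task withDeadline(LocalDateTime newDeadline) {
  return new Task(this.title, this.notes, newDeadline, this.assignedTo);
}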

Having immutable objects provides a lot of advantages, a few of them being:

  • Writing concurrent code is quite easy because we need not worry about taking locks on the object: we are never going to modify it, we can only read its data, and modification is prevented by the use of final.
  • Immutable objects lead to a lot of short-lived objects, which helps reduce the GC overhead involved in managing long-lived objects and objects with a lot of live references.

We can even provide a factory method for creating instances of Task. Let's see the above class in action:

import java.time.LocalDateTime;

public class GettingRidOfGettersSettersDemo {
  public static void main(String[] args) {
    //One can make use of Factory method to initialize the data
    Task task1 = new Task("Task 1", "some notes", LocalDateTime.now().plusDays(5), "sana");
    //Very clean approach to access the field data - no getYYY() noise 
    System.out.println(task1.title + " assigned to " + task1.assignedTo);
    Task task2  = new Task("Task 2", "some notes", LocalDateTime.now().plusDays(6), "raj");
    System.out.println(task2.title + " assigned to " + task2.assignedTo);
  }
}
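
As mentioned above, a static factory method can also be provided so that callers don't invoke the constructor directly. A minimal sketch (the of(...) name is my own choice):

//A hypothetical factory method that could be added to the Task class.
public static Task of(String title, String notes, LocalDateTime deadline, String assignedTo) {
  return new Task(title, notes, deadline, assignedTo);
}

//Usage: Task task = Task.of("Task 1", "some notes", LocalDateTime.now().plusDays(5), "sana");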

Update:
Thanks a lot for the comments and your thoughts, both here and on DZone. I spent some time identifying how one can work without getters and setters in the scenarios mentioned in the comments where it seemed impossible. One such scenario is marshalling and unmarshalling of JSON, and another is where we have a List of values as a property and need to give read-only access to the users of the object. Below are examples of using POJOs without getters and setters for JSON marshalling and unmarshalling with the Gson and Jackson libraries:

The below is the code using the Gson JSON library:

import com.google.gson.Gson;

import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;

public class GsonParserDemo {

  public static void main(String[] args) {
    HashMap<String, Object> jsonData = new HashMap<String, Object>();
    jsonData.put("name", "sanaulla");
    jsonData.put("place", "bangalore");
    jsonData.put("interests", Arrays.asList("blogging", "coding"));
    Gson gson = new Gson();

    String jsonString = gson.toJson(jsonData);
    System.out.println("From Map: " + jsonString);

    Person person = gson.fromJson(jsonString, Person.class);

    System.out.println("From Person.class: " + gson.toJson(person));
  }

  //static nested class so that Gson can instantiate it without an enclosing instance
  static class Person {
    public final String name;
    public final String place;
    private final List<String> interests;

    public Person(String name, String place, List<String> interests) {
      this.name = name;
      this.place = place;
      this.interests = interests;
    }

    //read-only view of the interests list
    public List<String> interests() {
      return Collections.unmodifiableList(interests);
    }
  }
}

The output of above code is:

From Map: {"name":"sanaulla","place":"bangalore","interests":["blogging","coding"]}
From Person.class: {"name":"sanaulla","place":"bangalore","interests":["blogging","coding"]}

Note: Gson uses neither the constructor nor getters and setters to map JSON to the Java class; it sets the fields directly via reflection.

The below is the code using the Jackson JSON library:

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.core.JsonGenerationException;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.IOException;
import java.util.HashMap;

public class JacksonParserDemo {
  public static void main(String[] args) throws JsonGenerationException,
      JsonMappingException, IOException {
    HashMap<String, String> jsonData = new HashMap<String, String>();
    jsonData.put("name", "sanaulla");
    jsonData.put("place", "bangalore");

    ObjectMapper objectMapper = new ObjectMapper();

    String jsonString = objectMapper.writeValueAsString(jsonData);
    System.out.println("Json from map : " + jsonString);

    Person person = objectMapper.readValue(jsonString, Person.class);
    System.out.println("Json from Person : "
        + objectMapper.writeValueAsString(person));
  }

}
class Person {
  
  public final String name;
  
  public final String place;

  @JsonCreator
  public Person(@JsonProperty("name") String name,
      @JsonProperty("place") String place) {
    this.name = name;
    this.place = place;
  }

}

The output of the above code is:

Json from map : {"name":"sanaulla","place":"bangalore"}
Json from Person : {"name":"sanaulla","place":"bangalore"}

I am also investigating some concerns raised about using this approach with object-relational mappers and with Joda-Time.

Using Google Guava Cache for local caching

A lot of times we have to fetch data from a database, another web service, or the file system. When a network call is involved, there are inherent network latencies and network bandwidth limitations. One of the approaches to overcome this is to have a cache local to the application.

If your application spans multiple nodes, then the cache is local to each node, causing inherent data inconsistency. This data inconsistency can be traded off for better throughput and lower latencies. But if the data inconsistency makes a significant difference, one can reduce the TTL (time to live) of the cached objects, thereby reducing the duration for which the inconsistency can occur.

Among the number of approaches to implementing a local cache, one which I have used in a high-load environment is Guava cache. We used Guava cache to serve 80,000+ requests per second, and the 90th percentile of the latencies was ~5 ms. This helped us scale despite the limited network bandwidth available.

In this post I will show how one can add a layer of Guava cache to avoid frequent network calls. For this I have picked a very simple example: fetching the details of a book by its ISBN using the Google Books API.

A sample request for fetching book details using ISBN13 string is:
https://www.googleapis.com/books/v1/volumes?q=isbn:9781449370770&key={API_KEY}

The part of the response that is useful to us is the volumeInfo object inside each entry of the items array (the sample response is not reproduced here).

A very detailed explanation of the features of Guava cache can be found here. In this example I will be using a LoadingCache. A LoadingCache takes a block of code which it uses to load the data into the cache for a missing key, so when you do a get on the cache with a non-existent key, the LoadingCache fetches the data using the CacheLoader, puts it in the cache, and returns it to the caller.

Let's now look at the model classes we would need for representing the book details:

  • Book class
  • Author class

The Book class is defined as:

//Book.java
package info.sanaulla.model;

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class Book {
  private String isbn13;
  private List<Author> authors;
  private String publisher;
  private String title;
  private String summary;
  private Integer pageCount;
  private String publishedDate;

  public String getIsbn13() {
    return isbn13;
  }

  public void setIsbn13(String isbn13) {
    this.isbn13 = isbn13;
  }

  public List<Author> getAuthors() {
    return authors;
  }

  public void setAuthors(List<Author> authors) {
    this.authors = authors;
  }

  public String getPublisher() {
    return publisher;
  }

  public void setPublisher(String publisher) {
    this.publisher = publisher;
  }

  public String getTitle() {
    return title;
  }

  public void setTitle(String title) {
    this.title = title;
  }

  public String getSummary() {
    return summary;
  }

  public void setSummary(String summary) {
    this.summary = summary;
  }

  public void addAuthor(Author author){
    if ( authors == null ){
      authors = new ArrayList<Author>();
    }
    authors.add(author);
  }

  public Integer getPageCount() {
    return pageCount;
  }

  public void setPageCount(Integer pageCount) {
    this.pageCount = pageCount;
  }

  public String getPublishedDate() {
    return publishedDate;
  }

  public void setPublishedDate(String publishedDate) {
    this.publishedDate = publishedDate;
  }
}

And the Author class is defined as:

//Author.java
package info.sanaulla.model;

public class Author {

  private String name;

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }
}

Let's now define a service which will fetch the data from the Google Books REST API; call it BookService. This service does the following:

  1. Fetch the HTTP response from the REST API.
  2. Use Jackson's ObjectMapper to parse the JSON into a Map.
  3. Fetch the relevant information from the Map obtained in step 2.

I have extracted a few operations out of the BookService into a Util class, namely:

  1. Reading the application.properties file which contains the Google Books API key (I haven't committed this file to the git repository, but you can add a file named application.properties to your src/main/resources folder and the Util API will read it for you; a sample is shown after this list).
  2. Making an HTTP request to the REST API and returning the JSON response.
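
A minimal application.properties would contain a single entry along these lines (the property name shown here is an assumption; it should match whatever Constants.GOOGLE_API_KEY resolves to in the code below):

# src/main/resources/application.properties (sample)
google.api.key=YOUR_GOOGLE_BOOKS_API_KEY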

The below is how the Util class is defined:

//Util.java
 
package info.sanaulla;

import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Properties;

public class Util {

  private static ObjectMapper objectMapper = new ObjectMapper();
  private static Properties properties = null;

  public static ObjectMapper getObjectMapper(){
    return objectMapper;
  }

  public static Properties getProperties() throws IOException {
    if ( properties != null){
        return  properties;
    }
    properties = new Properties();
    InputStream inputStream = Util.class.getClassLoader().getResourceAsStream("application.properties");
    properties.load(inputStream);
    return properties;
  }

  public static String getHttpResponse(String urlStr) throws IOException {
    URL url = new URL(urlStr);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    conn.setRequestProperty("Accept", "application/json");
    conn.setConnectTimeout(5000);
    //conn.setReadTimeout(20000);

    if (conn.getResponseCode() != 200) {
      throw new RuntimeException("Failed : HTTP error code : "
              + conn.getResponseCode());
    }

    BufferedReader br = new BufferedReader(new InputStreamReader(
          (conn.getInputStream())));

    StringBuilder outputBuilder = new StringBuilder();
    String output;
    while ((output = br.readLine()) != null) {
      outputBuilder.append(output);
    }
    conn.disconnect();
    return outputBuilder.toString();
  }
}

And so our service class looks like:

//BookService.java
package info.sanaulla.service;

import com.google.common.base.Optional;

import info.sanaulla.Constants;
import info.sanaulla.Util;
import info.sanaulla.model.Author;
import info.sanaulla.model.Book;

import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class BookService {

  public static Optional<Book> getBookDetailsFromGoogleBooks(String isbn13) throws IOException{
    Properties properties = Util.getProperties();
    String key = properties.getProperty(Constants.GOOGLE_API_KEY);
    String url = "https://www.googleapis.com/books/v1/volumes?q=isbn:"+isbn13;
    String response = Util.getHttpResponse(url);
    Map bookMap = Util.getObjectMapper().readValue(response,Map.class);
    Object bookDataListObj = bookMap.get("items");
    Book book = null;
    if ( bookDataListObj == null || !(bookDataListObj instanceof List)){
      return Optional.fromNullable(book);
    }

    List bookDataList = (List)bookDataListObj;
    if ( bookDataList.size() < 1){
      return Optional.fromNullable(null);
    }

    Map bookData = (Map) bookDataList.get(0);
    Map volumeInfo = (Map)bookData.get("volumeInfo");
    book = new Book();
    book.setTitle(getFromJsonResponse(volumeInfo,"title",""));
    book.setPublisher(getFromJsonResponse(volumeInfo,"publisher",""));
    List authorDataList = (List)volumeInfo.get("authors");
    for(Object authorDataObj : authorDataList){
      Author author = new Author();
      author.setName(authorDataObj.toString());
      book.addAuthor(author);
    }
    book.setIsbn13(isbn13);
    book.setSummary(getFromJsonResponse(volumeInfo,"description",""));
    book.setPageCount(Integer.parseInt(getFromJsonResponse(volumeInfo, "pageCount", "0")));
    book.setPublishedDate(getFromJsonResponse(volumeInfo,"publishedDate",""));

    return Optional.fromNullable(book);
  }

  private static String getFromJsonResponse(Map jsonData, String key, String defaultValue){
    return Optional.fromNullable(jsonData.get(key)).or(defaultValue).toString();
  }
}

Adding caching on top of the Google Books API call

We can create a cache object using the CacheBuilder API provided by the Guava library. It provides methods to set properties like:

  • the maximum number of items in the cache,
  • the time to live of a cache entry, based on its last write time or last access time,
  • a refresh interval after which an entry is reloaded,
  • recording stats on the cache, like the number of hits and misses and the loading time, and
  • a loader to fetch the data in case of a cache miss or a cache refresh.

So what we would ideally want is that a cache miss should invoke the API written above, i.e. getBookDetailsFromGoogleBooks, and we would want to store a maximum of 1000 items and expire items after 24 hours. The piece of code which builds the cache looks like:

private static LoadingCache<String, Optional<Book>> cache = CacheBuilder.newBuilder()
  .maximumSize(1000)
  .expireAfterAccess(24, TimeUnit.HOURS)
  .recordStats()
  .build(new CacheLoader<String, Optional<Book>>() {
      @Override
      public Optional<Book> load(String s) throws IOException {
          return getBookDetailsFromGoogleBooks(s);
      }
  });

It is important to note that the maximum number of items you want to store in the cache impacts the heap used by your application, so you have to decide this value carefully, depending on the size of each cached object and the maximum heap memory allocated to your application.
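
The demo below calls BookService.getBookDetails and BookService.getCacheStats. These are thin wrappers over the cache; their code is not reproduced above, but they could look something like this sketch:

//Sketch of the cache-backed wrappers inside BookService (assumed shape).
//Requires imports of com.google.common.cache.CacheStats and java.util.concurrent.ExecutionException.
public static Optional<Book> getBookDetails(String isbn13) throws ExecutionException {
  //Returns the cached value, loading it via getBookDetailsFromGoogleBooks on a miss.
  return cache.get(isbn13);
}

public static CacheStats getCacheStats() {
  //Available because the cache was built with recordStats().
  return cache.stats();
}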

Let's put this into action and also see how the cache reports its stats:

package info.sanaulla;

import com.google.common.cache.CacheStats;
import info.sanaulla.model.Book;
import info.sanaulla.service.BookService;

import java.io.IOException;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class App 
{
  public static void main( String[] args ) throws IOException, ExecutionException {
    Book book = BookService.getBookDetails("9780596009205").get();
    System.out.println(Util.getObjectMapper().writeValueAsString(book));
    book = BookService.getBookDetails("9780596009205").get();
    book = BookService.getBookDetails("9780596009205").get();
    book = BookService.getBookDetails("9780596009205").get();
    book = BookService.getBookDetails("9780596009205").get();
    CacheStats cacheStats = BookService.getCacheStats();
    System.out.println(cacheStats.toString());
  }
}

And the output we would get is:

{"isbn13":"9780596009205","authors":[{"name":"Kathy Sierra"},{"name":"Bert Bates"}],"publisher":"\"O'Reilly Media, Inc.\"","title":"Head First Java","summary":"An interactive guide to the fundamentals of the Java programming language utilizes icons, cartoons, and numerous other visual aids to introduce the features and functions of Java and to teach the principles of designing and writing Java programs.","pageCount":688,"publishedDate":"2005-02-09"}
CacheStats{hitCount=4, missCount=1, loadSuccessCount=1, loadExceptionCount=0, totalLoadTime=3744128770, evictionCount=0}

This is a very basic usage of Guava cache, and I wrote this as I was learning to use it. I have also made use of other Guava APIs like Optional, which helps in wrapping existent or non-existent (null) values into objects. The code is available on GitHub: https://github.com/sanaulla123/Guava-Cache-Demo. There are concerns such as how it handles concurrency, which I haven't gone into in detail, but under the hood it uses a segmented concurrent hash map such that gets are always non-blocking, while the number of concurrent writes is decided by the number of segments.

Some of the useful links related to this:
http://guava-libraries.googlecode.com/files/ConcurrentCachingAtGoogle.pdf

Very useful Console Window for Windows

ConEmu (http://www.fosshub.com/ConEmu.html) is a very useful and much better console window for Windows. It acts as a facade over cmd.exe. There are loads of good things in it, and it is highly recommended for programmers who are Windows users.

If you have installed Cygwin, you can integrate bash with your ConEmu terminal so that you can open a new tab running your bash shell :). On similar lines, you can open new tabs running a Git bash shell if you have Git installed.

To integrate Cygwin bash:
1. Open the settings – Windows + Alt + P
2. Select Startup -> Tasks
3. Click on "+" to add a new task
4. Name it "Cygwin bash" or "cygwin", whichever is comfortable.
5. In the commands text area paste this: "%SystemDrive%\cygwin64\bin\bash.exe" --login -i
6. Save and exit the settings.
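
On similar lines, a task for Git Bash can be added; the command below is a sketch, and the path assumes a default Git for Windows installation:

"%ProgramFiles%\Git\bin\bash.exe" --login -i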

Book Review: Murach’s Java Servlets And JSP 3rd Edition

Murach's Java Servlets and JSP is the ONLY book you need to learn web app development in Java using JSP and Servlets. The book covers all the concepts required for you to build a complete web application in Java. You will find topics covering:
– UI development using HTML and JavaScript
– Building Servlets to handle requests
– Using JSP to create UI templates
– Building a data access layer to communicate with the DB

It's a completely hands-on guide; mere reading will be of no help. The book also covers concepts and techniques related to secure programming, as well as some advanced concepts in Servlets.

A few among its numerous salient features:
  • Completely hands-on guide
  • Highly suitable for people who are already familiar with the Java language
  • Focuses on best practices wherever relevant. For example, the chapters on JSP give a guideline not to mix Java code with JSP and instead make use of the JSP Standard Tag Library (JSTL).
  • Lots of reusable code snippets – useful if someone is looking to implement a subset of the features explained.

I would highly recommend this book to:
"Any developer familiar with the Java programming language who is looking to learn web application development using Servlets and JSP".

One can purchase the latest edition here [though currently only the imported edition is available].

PS: I got a copy of the book in return for the review.

Book Review: RESTful Web Services with Dropwizard by Alexandros Dallas

RESTful Web Services with Dropwizard by Alexandros Dallas is a good guide to getting started with Dropwizard.

The author covers most of the features of Dropwizard, including creating RESTful endpoints, database access, authentication and creating views, by means of developing a sample Phonebook application.

The book is not an exhaustive guide to Dropwizard and may not match the documentation provided on the official Dropwizard site, but the approach of developing a sample application, adding features in each chapter, might appeal to some readers.

If you are the kind of person who likes learning a framework by developing a sample application, then you can pick up this book. Otherwise I would suggest the documentation will suffice.

Book Review: RabbitMQ Essentials by David Dossot

I have been working on integrating with RabbitMQ to implement a messaging architecture. All along I made use of the basic tutorials available on the RabbitMQ site to wade through the different concepts around AMQP and RabbitMQ.

Yesterday I got to read RabbitMQ Essentials by David Dossot. It's a pretty short and totally hands-on book. The good things about the book:

  • It picks a fictitious company and its requirements to develop a messaging feature; the author builds up the features very elegantly.
  • The author explicitly focuses on good practices and performance in the examples presented in the book.
  • It also touches upon how a messaging architecture can integrate heterogeneous software systems, with pieces written in Ruby, Python and Java.
  • There is liberal use of diagrams to explain the architecture and the flow of messages.

I was already familiar with the communication constructs of RabbitMQ and didn't find it difficult to understand the content and intent of the author. The book also helped me understand a few intricate aspects like dead letter queues and how to handle them, handling mandatory messages, setting TTLs on messages, and the different exchange types like direct, topic and fanout.

I feel the book is a good read for anyone who has started using RabbitMQ and has worked on integrating with the client API. It will help you correct your implementation and also understand a few gotchas which one encounters in real-life projects.

Parameterized Test Runner in JUnit

We have all written unit tests where a single test checks different possible input-output combinations. Let's look at how it's done by taking a simple Fibonacci series example.

The below code computes the Fibonacci series for the number of elements specified:

import java.util.ArrayList;
import java.util.List;

public class Fibonacci{

  public List<Integer> getFiboSeries(int numberOfElements) {
    List<Integer> fiboSeries = new ArrayList<>(numberOfElements);
    for (int i = 0; i < numberOfElements; i++) {
      //First 2 elements are 1,1
      if (i == 0 || i == 1) {
        fiboSeries.add(i, 1);
      } else {
        int firstPrev = fiboSeries.get(i - 2);
        int secondPrev = fiboSeries.get(i - 1);
        int fiboElement = firstPrev + secondPrev;
        fiboSeries.add(i, fiboElement);
      }
    }
    return fiboSeries;
  }

}

Let's see the conventional way of testing the above code with multiple input values:

import java.util.List;
import org.junit.Test;
import java.util.Arrays;
import static org.junit.Assert.*;

public class FibonacciCachedTest {

  /**
   * Test of getFiboSeries method, of class Fibonacci.
   */
  @Test
  public void testGetFiboSeries() {
    System.out.println("getFiboSeries");
    int numberOfElements = 5;
    Fibonacci instance = new Fibonacci();
    List<Integer> expResult = Arrays.asList(1, 1, 2, 3, 5);
    List<Integer> result = instance.getFiboSeries(numberOfElements);
    assertEquals(expResult, result);

    numberOfElements = 10;
    expResult = Arrays.asList(1, 1, 2, 3, 5, 8, 13, 21, 34, 55);
    result = instance.getFiboSeries(numberOfElements);
    assertEquals(expResult, result);

  }
}

So we have been able to test for 2 inputs; imagine extending the above for many more inputs – unnecessary bloat in the test code.

JUnit provides a different runner, the Parameterized runner, which requires the test class to expose a static method annotated with @Parameters. This method has to return the collection of inputs and expected outputs which will be used to run the test defined in the class. Let's look at the code which does this:

import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import static org.junit.Assert.assertEquals;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class ParametrizedFiboTest {

  private final int number;
  private final List<Integer> values;

  public ParametrizedFiboTest(FiboInput input) {
    this.number = input.number;
    this.values = input.values;
  }

  @Parameterized.Parameters
  public static Collection<Object[]> fiboData() {
    return Arrays.asList(new Object[][]{
      {new FiboInput(1, Arrays.asList(1))},
      {new FiboInput(2, Arrays.asList(1, 1))},
      {new FiboInput(3, Arrays.asList(1, 1, 2))},
      {new FiboInput(4, Arrays.asList(1, 1, 2, 3))},
      {new FiboInput(5, Arrays.asList(1, 1, 2, 3, 5))},
      {new FiboInput(6, Arrays.asList(1, 1, 2, 3, 5, 8))}
    });
  }

  @Test
  public void testGetFiboSeries() {
    Fibonacci instance = new Fibonacci();
    List<Integer> result = instance.getFiboSeries(this.number);
    assertEquals(this.values, result);
  }

}

class FiboInput {

  public int number;
  public List<Integer> values;

  public FiboInput(int number, List<Integer> values) {
    this.number = number;
    this.values = values;
  }
}

This way we would just need to add a new input and expected output in the fiboData() method to get this working!
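
For example, covering a series of seven elements would just be one more entry in fiboData():

{new FiboInput(7, Arrays.asList(1, 1, 2, 3, 5, 8, 13))}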

Simple Aspect Oriented Programming (AOP) using CDI in JavaEE

We write service APIs which cater to certain business logic. There are a few cross-cutting concerns that apply to all service APIs, like security, logging, auditing, measuring latencies and so on. This is repetitive non-business code which gets duplicated across methods. One way to reuse it is to move the repetitive code into its own methods and invoke them in the service APIs, something like:

public class MyService{
   public ServiceModel service1(){
      isAuthorized();
      //execute business logic.
   }
}

public class MyAnotherService{
  public ServiceModel service1(){
    isAuthorized();
    //execute business logic. 
  }
}

The above approach will work, but not without creating code noise and mixing cross-cutting concerns with the business logic. There is another approach to solving this requirement: using aspects, an approach called Aspect Oriented Programming (AOP). There are different ways you can make use of AOP – by using Spring AOP or Java EE AOP. In this example I will use AOP via CDI in a Java EE application. To explain this I have picked a very simple example of building a web application that fetches a few records from the database and displays them in the browser.

Creating the Data access layer

The table structure is:

create table people(
    id INT NOT NULL AUTO_INCREMENT, 
    name varchar(100) NOT NULL,
    place varchar(100),
    primary key(id));
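
A couple of sample rows to play with (the values here are my own; the servlet shown later looks up the person with id 2, so insert at least two rows):

insert into people(name, place) values ('John', 'Bangalore');
insert into people(name, place) values ('Jane', 'Mysore');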

Let's create a model class to hold a person's information:

package demo.model;

public class Person {
  private String id;
  private String name;
  private String place;

  public String getId() { return id; }
  public void setId(String id) { this.id = id; }
  public String getName() { return name; }
  public void setName(String name) { this.name = name; }
  public String getPlace() { return place; }
  public void setPlace(String place) { this.place = place; }
}

Let's create a Data Access Object which exposes two methods:

  1. to fetch the details of all the people
  2. to fetch the details of one person with a given id

package demo.dao;

import demo.common.DatabaseConnectionManager;
import demo.model.Person;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class PeopleDAO {

    public List<Person> getAllPeople() throws SQLException {
        String SQL = "SELECT * FROM people";
        Connection conn = DatabaseConnectionManager.getConnection();
        List<Person> people = new ArrayList<>();
        try (Statement statement = conn.createStatement();
                ResultSet rs = statement.executeQuery(SQL)) {
            while (rs.next()) {
                Person person = new Person();
                person.setId(rs.getString("id"));
                person.setName(rs.getString("name"));
                person.setPlace(rs.getString("place"));
                people.add(person);
            }
        }
        return people;
    }

    public Person getPerson(String id) throws SQLException {
        String SQL = "SELECT * FROM people WHERE id = ?";
        Connection conn = DatabaseConnectionManager.getConnection();
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setString(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    Person person = new Person();
                    person.setId(rs.getString("id"));
                    person.setName(rs.getString("name"));
                    person.setPlace(rs.getString("place"));
                    return person;
                }
            }
        }

        return null;
    }
}

You can use your own approach to get a Connection. In the above code I have created a static utility that returns the same connection every time.
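
DatabaseConnectionManager itself is not shown in this post; a minimal sketch of such a utility could look like the following (the JDBC URL, credentials and choice of MySQL driver are placeholders, not the original code):

//DatabaseConnectionManager.java - sketch only, not the original class
package demo.common;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DatabaseConnectionManager {

  private static Connection connection;

  //Lazily creates a single connection and hands the same one back to every caller.
  public static synchronized Connection getConnection() throws SQLException {
    if (connection == null || connection.isClosed()) {
      //placeholder URL and credentials
      connection = DriverManager.getConnection(
          "jdbc:mysql://localhost:3306/demo", "user", "password");
    }
    return connection;
  }
}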

Creating Interceptors

Creating Interceptors involves 2 steps:

  1. Create an interceptor binding: an annotation, itself annotated with @InterceptorBinding, that is used to bind the interceptor code to the target code which needs to be intercepted.
  2. Create a class annotated with @Interceptor which contains the interceptor code. It would contain methods annotated with @AroundInvoke, lifecycle callback annotations, @AroundTimeout and others.

Let's create an interceptor binding by the name @LatencyLogger:

package demo;

import java.lang.annotation.Target;
import java.lang.annotation.Retention;
import static java.lang.annotation.RetentionPolicy.*;
import static java.lang.annotation.ElementType.*;
import javax.interceptor.InterceptorBinding;

@InterceptorBinding
@Retention(RUNTIME)
@Target({METHOD, TYPE})
public @interface LatencyLogger {
    
}

Now we need to create the interceptor code, in a class annotated with @Interceptor and also with the interceptor binding we created above, i.e. @LatencyLogger:

package demo;
import java.io.Serializable;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@Interceptor
@LatencyLogger
public class LatencyLoggerInterceptor implements Serializable {

  @AroundInvoke
  public Object computeLatency(InvocationContext invocationCtx) throws Exception {
    long startTime = System.currentTimeMillis();
    //execute the intercepted method and store the return value
    Object returnValue = invocationCtx.proceed();
    long endTime = System.currentTimeMillis();
    System.out.println("Latency of " + invocationCtx.getMethod().getName()
        + ": " + (endTime - startTime) + "ms");
    return returnValue;
  }
}

There are two interesting things in the above code:

  1. use of @AroundInvoke
  2. parameter of type InvocationContext passed to the method

@AroundInvoke designates the method as an interceptor method; an interceptor class can have only ONE method annotated with this annotation. Whenever a target method is intercepted, its context is passed to the interceptor, and using the InvocationContext one can get the method details and the parameters passed to the method.

We need to declare the above interceptor in the WEB-INF/beans.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
    
    <interceptors>
        <class>demo.LatencyLoggerInterceptor</class>
    </interceptors>
</beans>

Creating Service APIs annotated with Interceptors

We have already created the interceptor binding and the interceptor which gets executed. Now let's create the service APIs and annotate them with the interceptor binding:

package demo.service;

import demo.LatencyLogger;
import demo.dao.PeopleDAO;
import demo.model.Person;
import java.sql.SQLException;
import java.util.List;
import javax.inject.Inject;

public class PeopleService {

  @Inject
  PeopleDAO peopleDAO;

  @LatencyLogger
  public List<Person> getAllPeople() throws SQLException {
    return peopleDAO.getAllPeople();
  }

  @LatencyLogger
  public Person getPerson(String id) throws SQLException {
    return peopleDAO.getPerson(id);
  }

}

We have annotated the service methods with the interceptor binding @LatencyLogger. The other way would be to annotate at the class level, which would apply the interceptor to all the methods of the class. Another thing to notice is the @Inject annotation, which injects the dependency into the class.
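
For illustration, the class-level variant would be a sketch like this, with the binding on the class so that every business method is intercepted and the per-method annotations are no longer needed:

//Sketch: binding applied at class level
@LatencyLogger
public class PeopleService {
  // ... same fields and methods as above, minus the method-level @LatencyLogger annotations
}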

Next is to wire up the controller and the view to show the data. The controller is a servlet and the view is a plain JSP using JSTL tags.

package demo;

import demo.model.Person;
import demo.service.PeopleService;
import java.io.IOException;
import java.sql.SQLException;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.inject.Inject;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(name = "AOPDemo", urlPatterns = {"/AOPDemo"})
public class AOPDemoServlet extends HttpServlet {

  @Inject
  PeopleService peopleService;

  @Override
  public void doGet(HttpServletRequest request, HttpServletResponse response)
          throws ServletException, IOException {
    try {
      List<Person> people = peopleService.getAllPeople();
      Person person = peopleService.getPerson("2");
      request.setAttribute("people", people);
      request.setAttribute("person", person);
      getServletContext().getRequestDispatcher("/index.jsp").forward(request, response);
    } catch (SQLException ex) {
      Logger.getLogger(AOPDemoServlet.class.getName()).log(Level.SEVERE, null, ex);
    }
  }
}

The above servlet is available at http://localhost:8080//AOPDemo. It fetches the data and forwards to the view to display it. Note that the service has also been injected using the @Inject annotation. If the dependencies are not injected and are instead created using new, then the interceptors will not work. This is an important point which I realised while building this sample.

The JSP to render the data would be:

<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@ taglib prefix="c" 
           uri="http://java.sun.com/jsp/jstl/core" %>
<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>AOP Demo</title>
  </head>
  <body>
    <h1>Hello World!</h1>
    <table>
      <tr>
        <th>Id</th>
        <th>Name</th>
        <th>Place</th>
      </tr>
      <c:forEach items="${requestScope.people}" var="person">
        <tr>
          <td><c:out value="${person.id}"/></td>
          <td><c:out value="${person.name}"/></td>
          <td><c:out value="${person.place}"/></td>
        </tr>
      </c:forEach>
    </table>
    <br/>
    Details for person with id=2
    <c:out value="Name ${person.name} from ${person.place}" />
  </body>
</html>

With this you would have built a very simple app using interceptors. Thanks for reading and staying with me till the end. Please share your queries/feedback as comments, and also share this article with your friends :)

Book Review: Java Performance by Charlie Hunt and Binu John

If you want to:
– learn about the commands used for OS monitoring
– understand the different components of the JVM
– monitor and tune the JVM to improve its performance

then Java Performance is the book you should pick up. This book covers:
– command-line tools for OS monitoring
– a JVM overview
– tools used for monitoring Java applications
– tuning the JVM, the majority of which relates to GC tuning
– tuning Java EE applications

The book has 12 chapters, and not all of them need to be read in one go. It is recommended to read all the chapters up to chapter 7, as they cover some really interesting and important topics relevant to OS monitoring, JVM basics, JVM monitoring and JVM tuning.

The chapters covering web services performance rely on SOAP-based services and might not be relevant to some readers. Also, GlassFish is used as the app server for the examples in the Java EE monitoring chapters. These chapters (8–12) can be read as and when the need arises.

Though the book was published in 2011, it covers the latest GC algorithm – the G1 collector – and also calls out changes and optimisations that can be made in Java 7. So one cannot rule out this book as outdated.

I really benefited from this book – the chapters related to OS monitoring, JVM overview, JVM monitoring and JVM tuning are the best: resourceful and highly informative. The step-by-step approach to JVM tuning described in the book also helps a lot in tuning your own Java applications.

Anyone reading this book should have a good understanding of Java programming and should also have at their disposal a Java application which they can tune as they read through the chapters. Mere reading will not be helpful.

Buy the print book from Flipkart.