Branch Prediction is Real

Branch prediction is real, and it has a major impact on performance. Here I will show some useless code just to demonstrate how it affects your performance. Two “algorithms” scan the entire array in O(n) and do some counting; however, due to branch prediction, the nature of the elements in the array makes a huge difference to performance.

  1. Sorted Array

    int initialSize = 10_000;
    long before,after;
    for(int j=1;j<1000;j+=100) {
      int size = initialSize*j;
      int[] array = IntStream.range(0, size).toArray();
    
       int count = 0;
       before = System.nanoTime();
       for (int i = 0; i < array.length - 1; i++) {
          if (array[i] < array[i + 1]) {
            count++;
          }
        }
        after = System.nanoTime();
        long elapsed = after - before;
        System.out.println(elapsed+","+size+","+count);
    }
    
    256933,10000,9999
    6509775,1010000,1009999
    2504190,2010000,2009999
    2378569,3010000,3009999
    3104675,4010000,4009999
    3909848,5010000,5009999
    5470804,6010000,6009999
    5878946,7010000,7009999
    6360480,8010000,8009999
    6848432,9010000,9009999
    
  2. Randomised Array

    Random random = new Random();
    int initialSize = 10_000;
    long before,after;
    for(int j=1;j<1000;j+=100) {
       int size = initialSize*j;
       int[] array =  random.ints(size).toArray();
       int count = 0;
       before = System.nanoTime();
       for (int i = 0; i < array.length - 1; i++) {
          if (array[i] < array[i + 1]) {
            count++;
          }
        }
        after = System.nanoTime();
        long elapsed = after - before;
        System.out.println(elapsed+","+size+","+count);
    }
    

    output on my machine:

    298587  ,10000,4970
    11325883,1010000,505161
    9414624 ,2010000,1004060
    13196647,3010000,1504505
    17669974,4010000,2005382
    22051148,5010000,2504787
    26793498,6010000,3004422
    30835706,7010000,3504727
    36048498,8010000,4004003
    45621428,9010000,4505365
    

I do need to print count; otherwise the compiler will play it smart and never execute the loop. Remember this.
Here’s a graph that shows the difference more clearly.
[Graph: elapsed time vs. array size, sorted vs. randomised input]

Now, both programs scan the entire array and do the same check, so theoretically they have the same asymptotic growth. However, in the case of the sorted array, the processor notices that the condition array[i] < array[i + 1] was true in the previous iterations and predicts the branch accordingly.
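A quick way to convince yourself (a minimal sketch, reusing the same counting loop as above): sort the random input first, and the timings fall back in line with the sorted case.

// sketch: requires java.util.Arrays and java.util.Random
int[] array = new Random().ints(5_000_000).toArray();
Arrays.sort(array); // after sorting, array[i] < array[i + 1] is almost always true

int count = 0;
long before = System.nanoTime();
for (int i = 0; i < array.length - 1; i++) {
    if (array[i] < array[i + 1]) {
        count++;
    }
}
long after = System.nanoTime();
// expect a time close to the sorted-array runs above
System.out.println((after - before) + "," + count);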

Clear Optimisations

Life without a JITter is almost impossible. A lot of people tend to think that compiler optimisations are a myth and that you shouldn't really bother whether the language you're using is compiled or fully interpreted. Just have a look at this example to see that optimisations are real.

int even = 0;
int odd = 0;
long before;
long after;
before  = System.currentTimeMillis();

for (int i = 0; i < 1_000_000_000; i++) { // one billion iterations
    if ((i & 0x1) != 0) // low bit set: i is odd
        odd++;
     else
        even++;
}
after = System.currentTimeMillis();
long t1 = (after-before);
System.out.println("t1="+t1); // ~10 ms on my machine

The reason this ran super fast is that the compiler noticed that the computation within the loop is never used by the program, and hence it decided to skip the loop entirely (dead code elimination). To prove it, use the computation and see the difference.

int even = 0;
int odd = 0;
long before;
long after;
before  = System.currentTimeMillis();

for (int i = 0; i < 1_000_000_000; i++) { // one billion iterations
    if ((i & 0x1) != 0) // low bit set: i is odd
        odd++;
     else
        even++;
}
after = System.currentTimeMillis();
long t1 = (after-before);
System.out.println("t1="+t1); // ~1023 ms on my machine
System.out.println(even);
System.out.println(odd);

As you can see, it takes much longer to execute (roughly two orders of magnitude) simply because it is actually doing the computation this time.

Favour Parallel Streams for Simple Cases

The Getting Started tutorial for Akka actors presents a parallel solution for approximating Pi. Basically, you have a master actor that hands parts of the computation to worker actors in a round-robin fashion and aggregates the results to form the final approximation.

All great, all brilliant.

I did some basic measurements on my machine (2.3 GHz Intel Core i7), running the example 4 times using actors:

   public static void main(String[] args) {
      int nrOfElements = 10000;
      int workers = 4;
      calculateUsingActors(workers, nrOfElements, 10000);
      calculateUsingActors(workers, nrOfElements, 10000);
      calculateUsingActors(workers, nrOfElements, 10000);
      calculateUsingActors(workers, nrOfElements, 10000);
   }

   static void calculateUsingActors(final int nrOfWorkers, final int nrOfElements, final int nrOfMessages) {
      ActorSystem system = ActorSystem.create("PiSystem");
      final ActorRef listener = system.actorOf(new Props(Listener.class), "listener");
      ActorRef master = system.actorOf(new Props(() -> {
              return new Master(nrOfWorkers, nrOfMessages, nrOfElements, listener);
          }), "master");

      master.tell(new Calculate());
   }

Results:

3.1415926435897883  894 milliseconds
3.1415926435897883  956 milliseconds
3.1415926435897883  921 milliseconds
3.1415926435897883  844 milliseconds

I implemented the same algorithm using parallel streams in a single Java expression and achieved better performance:

double calculateUsingParallelStream(int nrOfElements){
  double pi = IntStream.range(0, nrOfElements)
              .parallel()
              .mapToDouble(c -> calculatePiFor(c, nrOfElements))
              .reduce((c, c1) -> c + c1)
              .getAsDouble();
  return pi;
}

double calculatePiFor(int start, int nrOfElements) {
  double acc = 0.0;
   for (int i = start * nrOfElements; i <= ((start + 1) * nrOfElements - 1); i++) {
      acc += 4.0 * (1 - (i % 2) * 2) / (2 * i + 1);
   }
   return acc;
}

Results:

3.141592643589785  235 milliseconds
3.141592643589785  122 milliseconds
3.141592643589785  119 milliseconds
3.141592643589785  123 milliseconds
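For the record, the numbers above came from a simple wall-clock harness along these lines (a sketch; the run count and the 10_000 mirror the actor test above):

// minimal sketch of the measurement loop, assuming calculateUsingParallelStream above
for (int run = 0; run < 4; run++) {
    long before = System.currentTimeMillis();
    double pi = calculateUsingParallelStream(10_000);
    long after = System.currentTimeMillis();
    System.out.println(pi + "  " + (after - before) + " milliseconds");
}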

Arrays, The Broken Parts!!

In the early days of Java and C#, neither language had generics, so type parameters were unbound to any type. This resulted in an endless stream of class cast exceptions in our code; programmers were responsible for type checking, even though languages with type checkers claim to ensure type safety at compile time.
Generics were eventually introduced, but arrays (the most primitive collection) stayed broken. A simple example:

String strings[] = {"Broken","Type", "system"};
Object objects[] = strings;
objects[0] = 69; // compiles fine, but throws ArrayStoreException at runtime

So what’s the problem here?
The problem is that arrays are covariant. This fancy word means that String[] is a subtype of Object[].
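Generics, by contrast, got this right: they are invariant by default, so the equivalent assignment doesn't even compile.

List<String> strings = Arrays.asList("Broken", "Type", "system");
// List<Object> objects = strings; // does not compile: List<String> is not a List<Object>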

Covariant types are not bad as long as they are restricted to taking things out of the collection.

String strings[] = {"Broken","Type", "System"};
Object objects[] = strings;
Object obj =  objects[0] ; //safe

Contravariant types on the other hand are only safe for putting things in.

PECS (Producer extends, Consumer super) is a simple rule coined by Joshua Bloch to make type variance easier to remember.

Simply put:

Set<? extends T> => Covariant
Set<? super T> =>  Contravariant 
Set<T> => Invariant

Invariant means neither covariant nor contravariant: the exact type must match, and neither a subtype nor a supertype is allowed.
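Here's a minimal sketch of PECS in action; the copy helper below is a hypothetical example, not a library method:

// src is a producer of Ts (extends => covariant, safe to read from)
// dst is a consumer of Ts (super => contravariant, safe to write to)
static <T> void copy(List<? extends T> src, List<? super T> dst) {
    for (T item : src) {
        dst.add(item);
    }
}

// usage: integers flow out of a List<Integer> and into a List<Number>
List<Integer> ints = Arrays.asList(1, 2, 3);
List<Number> numbers = new ArrayList<>();
copy(ints, numbers);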

C# has better constructs for type variance: instead of using misleading keywords like extends and super, it uses in and out.

Scala, on the other hand, and unlike C# (which copied some mistakes blindly from Java), got it right. Arrays in Scala are invariant:

val a:Array[Any] = Array[String]("Better","Type","System") // does not compile

If you're using a typed language, it's very important that you understand type variance so you don't keep fighting with the compiler.

Concurrency vs Threading vs Parallel Processing vs Asynchrony

Let's clear up the terminology here, as these buzzwords are being used more and more often and we can expect to hear even more of them in the future: concurrency, multithreading, parallel processing, and asynchrony.

Concurrency: Doing more than one thing at the same time. This doesn't mean multithreading; multithreading is one way of achieving concurrency, but it's not the only way.
Multithreading: Using more than one thread. Threads usually refer to OS threads; these are really expensive and you shouldn't be using them directly in your code. In C#, feel free to go and delete every Thread you have. It's only acceptable to use threads directly if you're calling Sleep with the argument zero:

  Thread.Sleep(0);
 

This call to Sleep is very different from any call to Sleep with a non-zero argument: it means relinquish control to any thread of equal priority that is ready to run; if no such thread exists, do not relinquish control. Multithreading is hard to get right, especially if all you know about threading is creating an instance of the Thread class, and hence you should use a higher abstraction that does pooling and thread caching for you (see the sketch after these definitions).
Parallel Processing: Dividing a unit of work among multiple threads that run in parallel; that is, splitting the work across threads to maximise the use of multiple processors, where the threads can run on different cores independently.
Asynchrony: An aspect of concurrency where futures or callbacks can be used to avoid creating heavy OS threads. A future (or promise) represents an operation that will complete at some point, and this doesn't mean that a thread has to be created for that operation.
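To make the last two points concrete in Java terms, here is a minimal sketch (assuming java.util.concurrent): a pooled executor instead of hand-rolled threads, and a CompletableFuture (Java 8) representing an operation that completes later.

// a pooled executor instead of hand-rolled Threads
ExecutorService pool = Executors.newFixedThreadPool(4); // pooling done for you

CompletableFuture
        .supplyAsync(() -> 21 * 2, pool)           // runs on a pooled thread
        .thenApply(result -> "answer = " + result) // callback; no OS thread created just for it
        .thenAccept(System.out::println);

pool.shutdown();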

Java 8 Lambdas are not Just Syntactic Sugar

One might think that lambdas in Java 8 are just syntactic sugar for anonymous classes, and hence that these two code snippets are equivalent:

   Predicate<String> lengthGreaterThan2 = c -> c.length() > 2; // lambda
   Predicate<String> lengthGreaterThan2 = new Predicate<String>(){ // anonymous implementation
         @Override
         public boolean test(String input) {
               return input.length() > 2;
         }
   };

These declarations are not actually equivalent. To see why, we need to dig a bit into the generated bytecode.
Say we have this functional interface:

interface Bar{
   String sayHi(String input);	
}

Which will be used in the class Foo in two different ways

public class Foo{
	public void fn(final String input){

		Bar bar = new Bar(){
			@Override
			public String sayHi(String name){
				return "Hi " + name;
			}
		};

		String result = bar.sayHi(input);
		System.out.println(result);
	}
}

When we compile Foo.java, two .class files are generated: "Foo.class" and "Foo$1.class". Foo$1.class refers to the Bar that was implemented anonymously in the method fn. And here's the generated bytecode (as dumped by javap -c):

public void fn(java.lang.String);
    descriptor: (Ljava/lang/String;)V
    flags: ACC_PUBLIC
    Code:
      stack=3, locals=4, args_size=2
         0: new           #2                  // class Foo$1
         3: dup           
         4: aload_0       
         5: invokespecial #3                  // Method Foo$1."<init>":(LFoo;)V
         8: astore_2      
         9: aload_2       
        10: aload_1       
        11: invokeinterface #4,  2            // InterfaceMethod Bar.sayHi:(Ljava/lang/String;)Ljava/lang/String;
        16: astore_3      
        17: getstatic     #5                  // Field java/lang/System.out:Ljava/io/PrintStream;
        20: aload_3       
        21: invokevirtual #6                  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
        24: return        
      LineNumberTable:
        line 7: 0
        line 14: 9
        line 15: 17
        line 16: 24
} 

There is nothing new in this bytecode, because this is how anonymous classes have always been implemented. That's not the case with lambdas, however. Let me change the way I use Bar and make it Java 8 idiomatic:

public void fn(final String input){
		Bar bar = c-> "Hi " + c; 
		String result =  bar.sayHi(input);
		System.out.println(result);
	}

This time no Foo$1.class is generated, and Foo.java compiles to a single Foo.class file, but with different bytecode:

 public void fn(java.lang.String);
    descriptor: (Ljava/lang/String;)V
    flags: ACC_PUBLIC
    Code:
      stack=2, locals=4, args_size=2
         0: invokedynamic #2,  0              // InvokeDynamic #0:sayHi:()LBar;
         5: astore_2      
         6: aload_2       
         7: aload_1       
         8: invokeinterface #3,  2            // InterfaceMethod Bar.sayHi:(Ljava/lang/String;)Ljava/lang/String;
        13: astore_3      
        14: getstatic     #4                  // Field java/lang/System.out:Ljava/io/PrintStream;
        17: aload_3       
        18: invokevirtual #5                  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
        21: return        
      LineNumberTable:
        line 5: 5
        line 6: 6
        line 7: 14
        line 8: 21
} 

As you can see from the generated bytecode, at instruction 0 the Bar instance is obtained via an invokedynamic call, and no anonymous class is instantiated as we've seen in the version that uses anonymous classes (the actual call to sayHi still happens through invokeinterface at instruction 8).

invokedynamic reduces the overhead because no extra class is generated at compile time and instantiated just for the sake of an anonymous implementation; the translation strategy is deferred to runtime instead. For those who call everything "syntactic sugar": lambdas and anonymous classes aren't equivalent.

Declarative Programming in Java 8

Java is an object-oriented language that treats objects as first-class citizens. Objects are the basic tools for building applications in Java. Ten years ago OOP was a religion; now it is just a tool, and we should be able to add more tools to our toolset. Declarative programming is one of these powerful tools, and it helps us construct efficient, less verbose, and concurrency-friendly code.
It’s often hard to tell whether a language is fully functional or not, however, it’s more possible to tell to what extent a language supports functional paradigms. Is it possible to store a reference to a function? Is it possible to pass functions as parameters? Is it possible for a function to return another function? The answer is NO because functions were always considered as second class citizens in Java.
Surprisingly, Javascript which was initially developed to validate web forms does support these functional paradigms.

//store a reference to a function in a variable
var zero = function(){
  return 0;
};
// passing functions as parameters
function fn(callback){ // callback is a function
  callback(); // calling callback
}
// a function that returns a function
var v = function(){
  var inner = function(){
    return 1;
  };
  return inner;
}

Fortunately, these features have been added to Java 8. Finally, Java supports lambda expressions and method references.

Method references

static int fn(){  
        return -1;
}
Supplier<Integer> supplier = Main::fn;
System.out.println(supplier.get());

supplier is just a variable that holds a reference to a function that returns an integer. fn will only get executed when get() is called.

System.out.println(supplier.get()); // the actual execution of fn

This helps a lot in writing lazy code. Assume we have two functions: the first is expensive, the second is not, and the second relies on a value provided by the expensive one.

int fn(){
System.out.println("Bloody Expensive!!");
return 0;
}
int gn(int a, int b){
 if(a<10){
  return a;
 }else{
  return a + b;
 }
}

// calling gn using fn result
System.out.println(gn(4,fn()));

Quite simple, I suppose, but fn will always be called even though its result is only needed when a is 10 or more. We often count our chickens before they hatch.
So let's fix that by being lazier: instead of passing the raw integer value, we can pass a function that provides an integer.


int gn(int a, Supplier<Integer> supplier){
 if(a<10){
  return a;
 }else{
  return a + supplier.get();
 }
}

System.out.println(gn(4,FooClass::fn)); // passing a reference to fn to act as a supplier

Functional Interfaces

Some functional abstractions have been added to the framework, among them:

  • Supplier supplies a type T by calling T get()
  • Consumer consumes a type T by calling void accept(T t)
  • Function matches a function that takes a T as input and returns an R as output: R apply(T t)

Java still treats everything as a type. Supplier is a type; it is a functional interface. A functional interface is an interface with a single abstract method.

@FunctionalInterface
public interface Supplier<T> {
    T get();
}

Therefore, creating a supplier requires implementing this interface, and Java supports anonymous implementations of interfaces. Say we don't have a function to reference and we want to pass one anonymously:

Supplier<Integer> supplier = new Supplier<Integer>() {
            @Override
            public Integer get() {
                return 10; // some magic number
            }
 };
// remember that gn takes a supplier 
System.out.println(gn(4,supplier));

This looks brilliant to a Java fan (WOW!! what a language) but it doesn't look at all brilliant to a functional chap. So many types and so much trivial code.

Supplier<Integer> supplier = return x;  
// Java : NOOO!!! you cant do that, bloody programmer. 
// Programmer: what a stupid compiler, I wanted to say 
//             that get returns x,it's obvious.
//             You know what, I gonna use C#, 
//             it supports lambdas I was told
// Java: Lambdas are not OOP
// Programmer: Fuck OOP
// Java: Hang on, I will add them to my 8th version
// Programmer: Cheers!!

And finally, lambda expressions were added to the language. Lambda expressions come from lambda calculus. In simple words, if we have a simple computable function Sum(a,b) = a + b that takes two integers and returns their sum, then lambda calculus lets us rework it into the anonymous form (a,b) -> a + b.
In Java 8 we can do the same: we can implement functional interfaces anonymously using lambda expressions.

Runnable runnable = new Runnable() {
            @Override
            public void run() {
                System.out.println("something cool");
            }
        };

Can be reworked to the following

Runnable runnable = () -> {System.out.println("something cool");}; 

On a side note: you should use a higher-level abstraction for concurrency.
Lambda expressions are very useful when working with datasets. Streams, for instance, are very lambda friendly because they use functional interfaces like Predicate and Function:

int sum = shapes.stream()
                .filter(s -> s.getColour() == BLUE)
                .mapToInt(s -> s.getWeight())
                .sum();

Don't be shocked, you'll have to get familiar with this code. As you can see, streams are declarative: this code states the WHATs rather than the HOWs, and it's the library's responsibility to figure out the HOWs.
The imperative equivalent could be the following:

int sum = 0;
for(Shape shape: shapes){
    if(shape.getColour() == BLUE){
        sum += shape.getWeight();
    }
}

Apart from verbosity, it’s quite hard to parallelise imperative code. On the other hand, it’s incredibly easy to parallelise such an aggregation using streams.

int sum = shapes.parallelStream()   // magic goes here
                .filter(s -> s.getColour() == BLUE)
                .mapToInt(s -> s.getWeight())
                .sum();

Test this version against a massive dataset to see the huge performance gain.

Streams are declarative and declarative is lazy and nothing beats being lazy.

List<Integer> list = new ArrayList<>(Arrays.asList(1,2,3,4,5));
Stream<Integer> stream = list.parallelStream().filter(c -> c % 2 == 0);
list.add(6);
System.out.println(stream.count());

Despite the fact that the third even number was added after the filter was set up, the output of this programme
is 3. Why? Because streams are lazy. The actual filtering only gets executed when required, and calling count, which is a terminal operation, does require execution. A stream is an abstraction that takes our WHATs and works the HOWs out if and only if that's required.
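You can watch the laziness happen by putting a print inside the filter; nothing is evaluated until the terminal operation is called:

Stream<Integer> lazy = Arrays.asList(1, 2, 3, 4).stream()
        .filter(c -> {
            System.out.println("testing " + c); // side effect to expose evaluation
            return c % 2 == 0;
        });
System.out.println("stream built, nothing filtered yet");
System.out.println(lazy.count()); // only now do the "testing ..." lines print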

We’ve recently mentioned the abstraction Function , which is an abstraction that matches a function that takes T and returns R. For example

 Function<Integer, Integer> function = (c) -> c + 1;
 // take an integer and top it up by 1
 System.out.println(function.apply(5)); // apply it to 5

A practical example of this abstraction is a memoiser: a memoiser that caches calls to pure functions.
In functional programming, purity means getting the same output for the same input. Math.sqrt is a pure function: every time you call Math.sqrt(4) you expect 2. Calls to expensive pure functions can be cached using simple memoisers.

class Memoiser {
    private final Map<Integer, Integer> cache = new HashMap<>();
    // pure expensive function
    int expensive(int x) {
        System.out.println("expensive");
        return x + 1;
    }

    void fn(int x) {
        if (cache.containsKey(x)) {
            System.out.println(cache.get(x)); // cache hit: no recomputation
        } else {
            cache.put(x, expensive(x)); // compute once, remember forever
            System.out.println(cache.get(x));
        }
    }
}

Quite simple: expensive is pure and heavy, hence it can easily be cached. If you look back at our memoiser, you might notice that it is not very abstract; it only works for expensive(int). Let's add more flexibility to it: instead of calling expensive(int) directly, we can pass in a higher-order function that takes an integer and returns an integer.

class Memoiser{

    private final Function<Integer,Integer> higherOrderFunction;
    private final Map<Integer, Integer> cache = new HashMap<>();

    Memoiser(Function<Integer,Integer> function){
        this.higherOrderFunction = function;
    }

    void fn(int x) {
        if (cache.containsKey(x)) {
            System.out.println(cache.get(x)); // cache hit
        } else {
            cache.put(x, higherOrderFunction.apply(x));
            System.out.println(cache.get(x));
        }
    }

}    
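Usage then looks like this; any int-to-int function can now be memoised:

Memoiser memoiser = new Memoiser(x -> {
    System.out.println("expensive"); // simulate the heavy work
    return x + 1;
});
memoiser.fn(5); // prints "expensive" then 6
memoiser.fn(5); // cache hit: prints 6 only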

Hot Curries!!

All the functions we've seen so far take a single argument or none; the abstraction Function, for instance, matches a function that takes a single parameter. Cool, but what about functions that take more than a single parameter? Sum(a,b), for instance.

The equivalent of a Function in C# is Func, and .NET provides a massive number of overloads for the type Func:


Func<TResult> //matches a method that takes no arguments, and returns value of type TResult.
Func<T, TResult>//matches a method that takes an argument of type T, and returns value of type TResult.
Func<T1, T2, TResult>//matches a method that takes arguments of type T1 and T2, and returns value of type TResult.
Func<T1, T2, …, TResult>//and so on up to 16 arguments, and returns value of type TResult.

And therefore the third overload could be used in C# to match Sum:

Func<int, int, int> sum = (c,c1) => c + c1;
Console.WriteLine(sum(2,3));

These overloads don't exist in Java, so how are we going to deal with multi-parameter functions? The solution is currying. Haskell Curry proved that any computable function that takes multiple arguments can be transformed into a chain of functions, each taking a single argument. So instead of returning the result immediately, we can return a function that has full or partial computational knowledge of the result.
(a,b) -> a + b can be reworked into the curried version (a) -> (b) -> a + b
The curried version returns a function that takes a single argument.

How does this look in Java?

Function<Integer,Function<Integer,Integer>> sum = a -> b -> a + b;
// apply 2 + 3
int result = sum.apply(2).apply(3);
System.out.println(result);
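A nice bonus of the curried form is partial application: applying only the first argument yields a new function.

Function<Integer, Integer> addTwo = sum.apply(2); // partially applied: b -> 2 + b
System.out.println(addTwo.apply(3));  // 5
System.out.println(addTwo.apply(10)); // 12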

Declarative programming is a very powerful tool that you should use in your daily hacks; the days when imperative programming was the only methodology are gone. If you can't see the point of lambdas and functional stuff, then you're still a Blub programmer. Blub programmers tend to see individual language features they don't understand as "weird".
Happy hacking!!!!

From SQL Joins to Arrays in PostgreSQL

Joins are at the heart of any relational database. Obviously, we use relations to model relationships between tables in SQL, and joins in their different flavours (inner, left, or right) are the semantics for traversing a relational model. Well, I am not saying joins are bad; however, I believe that they are horrendously abused. If your schema requires a lot of joins in every SQL query just to retrieve things, then consider changing it now.

We have a schema for modelling a tree structure, a very typical one: a table called NODE and a bridge table called NODE_RELATION that holds a parent node id and a child node id. A typical recursive relationship.
Unfortunately, these two tables are massive and queried quite a lot. Querying the bridge table was eating the system's resources, and my responsibility was to improve it.

Well, maybe my solution was not acceptable SQL practice, but I think it is fine; the whole idea behind NoSQL databases and the CAP theorem is to relinquish some ACID properties. Ugly solutions are acceptable if the intent is to improve performance.

The DB we're talking about runs on PostgreSQL, a durable and free ACID database. Unlike most SQL databases, PostgreSQL supports arrays as a column type, with comprehensive semantics for the common functionality (push element, remove element, aggregation, etc.). In simple words, my solution was to store the foreign ids in an array. We had to ensure backward compatibility: the bridge table was used everywhere in the code, so deleting it and using only arrays was not an option.
Let’s go back to our schema.

CREATE TABLE node -- ddl for the node table
(
  id integer NOT NULL DEFAULT nextval('node_seq'::regclass),
  name text,
  CONSTRAINT node_pkey PRIMARY KEY (id)
);

CREATE TABLE node_relation -- ddl for the bridge table
(
  id integer NOT NULL DEFAULT nextval('node_parent_child'::regclass),
  parent integer, -- node id
  child integer, -- node id
  CONSTRAINT node_relation_pkey PRIMARY KEY (id)
);

Quite simple, I reckon. Well, let's get to the hack then. Firstly, we need to add the arrays to the node table:

ALTER TABLE "node" ADD COLUMN children integer[];
ALTER TABLE "node" ADD COLUMN parents integer[];

The arrays are empty so far, so we have to seed them from the bridge table. In other words, we need each node's parents and children:

UPDATE "node" SET children = v.children
FROM(
SELECT node_relation.parent as parent,array_agg(child) AS children FROM node_relation
GROUP BY node_relation.parent
)v
WHERE "node".id = v.parent;

array_agg is the star here: an aggregate function that collects its input values into an array.
So far so good; we have managed to seed our table. But what about new relations that get inserted into or deleted from the bridge table? We need a simple and quick solution, and nasty solutions are fine in some circumstances. So, I've decided to hire a trigger that watches the insertion and deletion of records on the node_relation table and appends elements to, or removes them from, the arrays accordingly. Here we go!

CREATE OR REPLACE FUNCTION sync_node_and_node_relation() RETURNS TRIGGER AS $sync_node_node_relation$
 BEGIN
  IF (TG_OP = 'INSERT') THEN
    -- add new element to children's array
    UPDATE "node" SET children = array_append("node".children,NEW.child)
    WHERE "node".id = NEW.parent;

    -- add it to its parent	
    UPDATE "node" SET parents = array_append("node".parents,NEW.parent)
    WHERE "node".id = NEW.child;
    
    RETURN NEW;
 ELSEIF (TG_OP = 'DELETE') THEN
   -- remove ids from the arrays
   UPDATE "node" SET children = array_remove("node".children,OLD.child)
   WHERE "node".id = OLD.parent;
   
   UPDATE "node" SET parents = array_remove("node".parents,OLD.parent)
   WHERE "node".id = OLD.child;
   	
  RETURN OLD;
  END IF;
 END;
 
$sync_node_node_relation$ LANGUAGE plpgsql;  
DROP TRIGGER IF EXISTS sync_node_node_relation ON node_relation;

CREATE TRIGGER sync_node_node_relation AFTER INSERT OR DELETE ON node_relation
FOR EACH ROW EXECUTE PROCEDURE sync_node_and_node_relation();
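And here's the payoff: fetching a node's children (say node 42, a hypothetical id) no longer touches the bridge table at all.

-- before: go through the bridge table
SELECT child FROM node_relation WHERE parent = 42;

-- after: read the array straight off the node row
SELECT children FROM node WHERE id = 42;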

That was it!! After we started using the arrays for fetching nodes' parents and children, we noticed massive performance gains. Following the rules is usually good, but hacking is playing for cleverness.
Enjoy your day!!

Building a simple IoC Container for Java using AspectJ

Having classes, fields, and methods in your code does not mean that you are applying object-oriented principles. Classes, fields, and methods are just structures that we use to achieve some level of abstraction and cohesion.

class Adder{

  void add(int x,int y){
    int sum = x + y;
    DatabaseWriter writer = new DatabaseWriter();
    writer.writeToDB(sum);
  }
}

Despite the fact that it is just an adder, this code does not work if the database is not available, simply because it depends on database features. So basically, this code is rigid, untestable, tightly coupled, and just awful.

Cool, let's hack it and make it better: more abstract and less dependent.

interface Writer{
 void write(Object object);
}

class DatabaseWriter implements Writer{
 public void write(Object object){
  ...
 }
}

And now, instead of using DatabaseWriter directly in the Adder, we can inject it and use the Writer object which is more abstract.

 class Adder{
  private Writer writer;
  
  void add(int x,int y){
   int sum = x + y;
   writer.write(sum);
 }
}

So far so good; the Adder looks alright now. But we are still missing something: how are we going to tell Adder to use DatabaseWriter, or any other Writer implementation? The answer is: Dependency Injection.
Dependency Injection is a pattern that implements inversion of control. A common way of doing DI in Java is to use Spring DI, which is brilliant. But what stops us from building our own inversion of control container? It is fun, isn't it? Our IoC container is called simpleDI and it is on GitHub. Go and fork it!

Well, Let’s get started.

public interface IFoo {
    String fn();
}

public class Bar implements IFoo{

    @Override
    public String fn() {
        return "Bar";
    }
}
@Service
public class SimpleService {

    @Injectable
    private IFoo foo;
}

SimpleService is a service that has a dependency on some implementation of IFoo. SimpleService is annotated with @Service and the IFoo field with @Injectable; this is how we tell simpleDI that SimpleService is a service with an injectable dependency on IFoo. Alright, now we need to tell simpleDI that Bar is the concrete implementation of IFoo:

DIBinding fooDIBinding = new DIBinding.BindingBuilder(IFoo.class).to(Bar.class).build();
DIContext.INSTANCE.addBinding(fooDIBinding);
DIContext.INSTANCE.injectDependencies();
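From the caller's point of view, nothing else is needed; a sketch of the intended usage:

// the aspect below intercepts this constructor call,
// so the @Injectable field gets populated with a Bar
SimpleService service = new SimpleService();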

Whenever an instance of a type annotated with @Service gets instantiated, our DI container has to inject its dependencies based on the bindings we specified. Hmmm, in plain English: we need to hijack constructor calls.
AspectJ is a very powerful tool for aspect-oriented programming in Java, and here we go:

@Aspect
public class InjectionAspect{

    @Pointcut("execution ( (@simpleDI.annotations.Service *).new(..))")
    public void serviceInitPointcut(){}

    @Before("serviceInitPointcut()")
    public void serviceInitAdvice(JoinPoint joinPoint) throws InstantiationException, NoSuchMethodException, InvocationTargetException, IllegalAccessException {
        Object target =  joinPoint.getTarget();
        DIContext.INSTANCE.initService(target);
    }

}

Our aspect contains a simple advice that intercepts services' constructor calls so that their dependencies get injected accordingly.
Now we can iterate over declared fields in the service and instantiate them.

   public Object initService(Object service) throws NoSuchMethodException, InstantiationException, IllegalAccessException, InvocationTargetException {
        if(!service.getClass().isAnnotationPresent(Service.class)){
            throw new IllegalArgumentException("The passed object is not a service, use "+Service.class.getName() +" annotation");
        }
        for(Field field: service.getClass().getDeclaredFields()){
            if(field.isAnnotationPresent(Injectable.class)){
                recursiveFieldBinding(service, field);
            }
        }

        return service;
    }

    private void recursiveFieldBinding(Object root, Field field) throws InvocationTargetException, NoSuchMethodException, InstantiationException, IllegalAccessException {
        Class fieldType = field.getType();
        field.setAccessible(true);
        DIBinding binding = DIContext.INSTANCE.getBindingForType(fieldType);
        InstantiationMethod method = binding.getInstantiationMethod();
        Object instance = method.getInstance(binding.getBindingType());

        field.set(root,instance);

        for(Field innerField: fieldType.getDeclaredFields()){
            if(innerField.isAnnotationPresent(Injectable.class)){
                this.recursiveFieldBinding(instance, innerField);
            }
        }


    }

Fields are types, and those types might themselves have injectables, so we have to inject their dependencies too; this is why we had to bind fields recursively. Well, our recursive binding is a bit vulnerable: we will definitely get a stack overflow if the service contains cyclic dependencies. So, let's check for cycles then.

public void checkForCycles() throws CyclicDependencyException {
        for(DIBinding binding: bindingSet){
            checkForCycles(binding.getBindingType(), new HashSet<Class>());
        }
    }

    void checkForCycles(Class clazz, HashSet<Class> visited) throws CyclicDependencyException {
        if(visited.contains(clazz)){
            throw new CyclicDependencyException(String.format("Cyclic Dependency Detected: %s", clazz));
        }
        visited.add(clazz);
        for(Field innerField: clazz.getDeclaredFields()){
            if(innerField.isAnnotationPresent(Injectable.class)){
                this.checkForCycles(innerField.getType(), visited);
            }
        }
        visited.remove(clazz); // backtrack, so shared (diamond) dependencies are not mistaken for cycles
    }

Brilliant, our tiny framework is ready now. So let's see how flexible our simpleDI is.

  • We can bind abstract types to concrete ones
               DIBinding fooDIBinding = new DIBinding.BindingBuilder(IFoo.class).to(Bar.class).build();
               
  • We can bind concrete types to themselves
               DIBinding barDIBinding = new DIBinding.BindingBuilder(Bar.class).toSelf().build();
               
  • By default, no-arg constructors are used to instantiate injectables, but we can easily change this behaviour and use an object factory instead

And there we go, we've implemented our very basic IoC container. Feel free to fork it on GitHub and hack on it if you want.