New geeky acronym for programmers: CTMMC

Programmers, a new acronym has been created for us: CTMMC™.

Yeah, I know, it doesn’t look that appealing, at least not until you see what it means. You can see it in this article, written by Lukas Eder, CEO of Data Geekery GmbH and creator of jOOQ.

OK, the story is as follows. Lukas wrote an article titled The Code That Made Me Cry, which I found and read on DZone. I immediately noticed that there was something catchy about the article’s name. The Code That Made Me Cry. So nice to pronounce.

Also, the article dealt with a topic that we programmers face every day, and at the same time enjoy facing, because of what it leads to: shaming other programmers for their code’s bad quality and faults. Oh, we love to do that, don’t we? You can see this blaming in action during any code review session.

So I liked the article, and made this comment:

Lukas responded:

I didn’t think he liked the idea that much, until one week later, when he came back with one more response:

Here’s the link he is directing me towards: http://ctmmc.net.

Now I was amazed. He even created a website for us to share and discuss pieces of CTM(Us)C? This idea is even better!!! He collected several examples, put them publicly on the website, and now he is asking us to contribute by submitting those everyday, WTF-deserving pieces of code that we can shame. I accepted the challenge and contributed some code of my own (shame on me). I also skimmed some items and particularly enjoyed 17, 24, 27 (this one led me to an interesting finding), 32 and 36.

I have to say, I find the spirit of this idea fascinating. You find a piece of code you MUST blame (never mind, it probably deserves it), prepare your sharp and sarcastic explanation of why that code made you cry, put a title on it and submit it. That’s it. Practicing this a lot might help you sharpen your programming skills as well as your sense of humor. Two huge virtues. Furthermore, in that simple way we can make another programmer laugh… and be more careful too, since no one wants their code posted as a CTM(Someone)C, right? Well, I wouldn’t mind. It teaches me anyway.

Create Your Own Extensible Apache Ant Framework

The technology I’m going to talk about here might be a little old fashioned, but I think it will be interesting to some people, especially those starting to automate their projects’ usual tasks, like compiling, testing, deploying, etc. It seems like everyone starting to do this chooses Apache Ant as their main tool, and that tool is exactly what I use in this article.

I’m going to show you a way to create an extensible framework for defining and executing chained Ant tasks. A full version of the framework can be downloaded here.

A typical scenario for the framework is the following: you need to execute a sequence of operations in order to deploy your project, say download it from a VCS, compile it, test it, generate documentation, back up the database, package everything and deploy it. You want to do all this by typing a simple command that triggers the sequence. For instance, you want to say: execute deploy. And that’s it. The complete chain of operations, up to deploying your app, is executed one after another.

Well, we are going to achieve this using Ant as our main tool.

RATIONALE

First, we need to notice that these tasks are not isolated; they depend on each other. You can’t start compiling your code before you have downloaded it from your VCS. So there are dependencies among these tasks that can be established in advance, and Ant allows us to express dependencies among tasks easily. Some tasks might be isolated (no dependencies on other tasks), but in general tasks depend on each other.

The second thing we must be aware of is that, if we are going to create a framework, we must try to cope with (nearly) all the cases that might come up. I mean, you can’t expect everyone to download their source code from a Git repo, because some people are still using Subversion. Or to compile their project with javac, because there are other compilers and languages. So we must create a framework that accepts all kinds of tasks. But people do perform general tasks, like ‘Download from a VCS’, or ‘Compile code’, or ‘Backup the database’, whether it is PostgreSQL or MongoDB. So these tasks can at least be grouped into categories.

The third thing to be aware of is that people do not need to execute ALL the tasks in one run, or at all. There are people who don’t run tests. Or apps that don’t need a data backend. So there must be a way to define exactly which tasks you want to run.

So far, we need three things:

  • Create task categories, and specific tasks inside these categories, like: the category ‘Backup the database’ contains the tasks ‘Dump PostgreSQL database’ or ‘Backup SQLite database’; or ‘Deploy app’ means ‘Upload to HTTP server’ or ‘Transfer to FTP server’.
  • Define the task dependencies. Or better, the dependencies between the task categories, like: ‘Compile code’ goes after ‘Download from a VCS’.
  • Define exactly which tasks we want to execute out of the entire universe of specific tasks, like: I want to ‘Export code from GitHub’, ‘Compile code with gcc’ and ‘Deploy app to HTTP server’ (no testing, really?).

So, let’s get our hands dirty.

TASKS AND CATEGORIES

Our categories and their tasks are going to be established based on directories. I’m going to refer to the base directory as ‘./’. We are going to have a directory for tasks called ‘./tasks/’. So, if we have the category ‘compile’ and the task ‘compile-javac’ (yes, let’s start to use more robotic names; computers like them :-) ), then we’ll have a directory like ‘./tasks/compile/compile-javac/’.

Each specific task will define its own way of executing itself inside a build.xml file that defines the task and is located in the task’s own directory. So we will have a file like ‘./tasks/compile/compile-javac/build.xml’.

For now, let’s define the specific tasks as dummy tasks. They will only print an explanation of what they are supposed to do. Our ‘./tasks/deploy/ftp-transfer/build.xml’ file would look like this:

<?xml version="1.0" encoding="ISO-8859-1"?>
<project name="ftp-transfer" default="default">
<target name="default">
<echo message="doing deploy - ftp transfer"/>
</target>
</project>

The build file for a specific task is a self-contained build file; that is, it is defined as a regular Ant build file with targets that contain all the invocations the task needs to perform its duty, for example, executing the pg_dump utility to back up a PostgreSQL database.
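
For instance, a real (non-dummy) ‘./tasks/database/postgres-dump/build.xml’ might look roughly like the sketch below; the property names and paths are hypothetical, and it assumes pg_dump is available on the PATH:

<?xml version="1.0" encoding="ISO-8859-1"?>
<project name="postgres-dump" default="default">
    <target name="default">
        <!-- Hypothetical values; in a real setup they would come from a properties file -->
        <property name="db.name" value="mydb"/>
        <property name="backup.file" value="backup/mydb.sql"/>
        <exec executable="pg_dump" failonerror="true">
            <arg value="--file=${backup.file}"/>
            <arg value="${db.name}"/>
        </exec>
    </target>
</project>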

DEFINING DEPENDENCIES AMONG CATEGORIES

Now we need to define dependencies among the task categories. These dependencies will be expressed as regular Ant target dependencies, with one target corresponding to each category.

Dependencies will be defined inside the file ‘./tasks/dependencies.xml’, which is an Ant build file. For now, let’s see an example of what this file could contain:

<?xml version="1.0" encoding="ISO-8859-1"?>
<project name="dependencies" default="do.all"> 

<target name="do.export">
<ant dir="${basedir}/export/${export.specific.task}"
inheritAll="false"/>
</target>

<target name="do.compile"
depends="do.export">
<ant dir="${basedir}/compile/${compile.specific.task}"
inheritAll="false"/>
</target>
</project>

What we do in each target here is invoke a remote target; we invoke the target declared as default in the build.xml file located in a specific task’s directory. For instance, the target ‘do.compile’ executes the default target in the directory ‘./tasks/compile/${compile.specific.task}’, where ‘compile.specific.task’ will be the name of the specific task to execute (e.g. ‘compile-javac’).

DEFINING THE TASKS WE WANT TO EXECUTE

Defining the tasks we want to execute is as simple as loading, inside ‘./tasks/dependencies.xml’, a properties file that defines these tasks, and executing each target on the condition that a task for its corresponding category has been defined. In Ant it can be done like this:

<?xml version="1.0" encoding="ISO-8859-1"?>
<project name="dependencies" default="do.all">

<!-- Load specific tasks from specific-tasks.properties -->
<dirname property="basedir" file="${ant.file.dependencies}"/>
<property file="${basedir}/specific-tasks.properties"/>

<target name="do.export"
 if="export.specific.task">
<ant dir="${basedir}/export/${export.specific.task}"
inheritAll="false"/>
</target>

<target name="do.compile"
depends="do.export"
 if="compile.specific.task">
<ant dir="${basedir}/compile/${compile.specific.task}"
inheritAll="false"/>
</target>
</project>

Notice how I load the file ‘./tasks/specific-tasks.properties’ and then use if conditions to decide whether to execute each target. Also, notice that I follow certain conventions for naming the properties that define the specific tasks; this is good practice, but you could name them however you want as long as you check the correct name in each target.

An example of a specific-tasks.properties file looks like this:

export.specific.task=git-export
compile.specific.task=javac-compile
test.specific.task=junit-test
build.specific.task=custom-build
database.specific.task=postgres-dump
doc.specific.task=javadoc
deploy.specific.task=ftp-transfer

WIRING EVERYTHING TOGETHER: THE FRAMEWORK

All we have now is a bunch of Ant files and targets. Yes, ‘./tasks/dependencies.xml’ has some nice stuff inside, because it is capable of executing specific tasks sequentially by walking through the category targets in a chained manner. But how do we trigger the chain?

We only need to invoke the last category in the chain we want to execute. If you want to execute tasks up to testing, you invoke the ‘test’ category. For this, I created a batch script called ‘invoke.bat’ that contains the following:

@echo off
rem Directory where this script lives (with a trailing backslash)
set script_dir=%~dp0
rem Strip the trailing backslash
set script_dir=%script_dir%##
set script_dir=%script_dir:\##=##%
set script_dir=%script_dir:##=%
rem Use the Ant distribution bundled with the framework
set ANT_HOME=%script_dir%/3p/ant
set PATH=%PATH%;%ANT_HOME%/bin
set ant_lib_extras=%script_dir%/3p/ant/lib/extras
rem The first argument is the category; the target to run is do.<category>
set category=%1
set target=do.%category%
cd ..
ant -quiet -lib "%ant_lib_extras%" -buildfile tasks/dependencies.xml %target%
pause

NOTE: The framework has Ant bundled within it, so that it can be sure Ant exists and save you some work. It also contains a Linux version of the script.

You can execute the script like this: invoke deploy

The output of running it with the sample specific-tasks.properties file above should look like:

[echo] doing export - git export
[echo] doing compile - javac compile
[echo] doing test - junit test
[echo] doing build - custom build
[echo] doing database - postgres dump
[echo] doing doc - javadoc
[echo] doing deploy - ftp transfer

If you run invoke test, then the output looks like:

[echo] doing export - git export
[echo] doing compile - javac compile
[echo] doing test - junit test

EXTENDING THE FRAMEWORK

You can create new tasks inside the categories already defined by simply creating a new directory for the task inside the corresponding category folder, and creating its ‘build.xml’ file.

To create a new category, you create a folder with the name of the category inside the ‘./tasks’ directory, plus the folders and ‘build.xml’ files for each of its specific tasks. Then you create a target for this category in ‘dependencies.xml’ and redefine the dependencies as needed, as in the sketch below.
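
For example, if we added a hypothetical ‘notify’ category that runs after ‘deploy’, its target in ‘dependencies.xml’ could follow the same pattern as the existing ones:

<target name="do.notify"
        depends="do.deploy"
        if="notify.specific.task">
    <ant dir="${basedir}/notify/${notify.specific.task}"
         inheritAll="false"/>
</target>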

To invoke a new task or category, just edit ‘specific-tasks.properties’ and run the script.

CONCLUSIONS

The framework is based on Ant’s basic features, so it’s very simple. It relies on a predefined directory structure to define the category and task hierarchy; on ‘dependencies.xml’ to define dependencies between task categories; and on a simple script that triggers the chain of tasks you ask for. You can try more on your own by downloading the framework.

PostgreSQL to SQLite: The journey

A few months ago I wanted to migrate an app to use SQLite as a data backend. In fact, I wanted it to work with both PostgreSQL and SQLite interchangeably (though not at the same time). I wanted to switch between these two databases easily, without changing any code. I did it, but along the way I had to solve some problems that might be interesting to many other people.

The solutions I found were spread across the web, but there was no single place that explained how to completely achieve what I wanted. So, the aim of this post is to condense my learning into one article that may serve others as a (semi) complete guide. This guide might be useful not only to those creating their own frameworks, but also to anyone who doesn’t use one and is willing to try some quirks and tricks to make their app work.

THE BEGINNING

There are many cross-database incompatibilities between PostgreSQL and SQLite, most notably around data types. If you want the same code to work for both databases, you’d better use a framework that manages this for you. But here’s the thing: the framework I use is one I created myself, and it didn’t (completely) take these differences into account, since I mainly use PostgreSQL as my database; that’s how and why my problems arose.

My framework covers many things, but here I focus on the data access part. It uses a JDBC driver to connect to the databases, but it provides more abstract ways to do so; that’s pretty much the data access part of the framework.

A basic DAO class for my framework would look like this:

public class MyDAO extends BaseDAO {
    public MyDAO() {
        super("context_alias", new DefaultDataMappingStrategy() {
            @Override
            public Object createResultObject(ResultSet rs) throws SQLException {
                MyModel model = (MyModel)ObjectsFactory.getObject("my_model_alias");

                model.setStringField(rs.getString("string_field"));
                model.setIntegerField(rs.getInt("integer_field"));
                model.setBigDecimalField(rs.getBigDecimal("bigdecimal_field"));
                model.setDateField(rs.getDate("date_field"));
                model.setBooleanField(rs.getBoolean("boolean_field"));

                return model;
            }
        });
    }

    @Override
    public String getTableName() {
        return "table_name";
    }

    @Override
    public String getKeyFields() {
        return "string_field|integer_field";
    }

    @Override
    protected Map getInsertionMap(Object obj) {
        Map map = new HashMap();
        MyModel model = (MyModel) obj;
        map.put("string_field", model.getStringField());
        map.put("integer_field", model.getIntegerField());
        map.put("bigdecimal_field", model.getBigDecimalField());
        map.put("date_field", model.getDateField());
        map.put("boolean_field", model.getBooleanField());
        return map;
    }

    @Override
    protected Map getUpdateMap(Object obj) {
        Map map = new HashMap();
        MyModel model = (MyModel) obj;
        map.put("bigdecimal_field", model.getBigDecimalField());
        map.put("date_field", model.getDateField());
        map.put("boolean_field", model.getBooleanField());
        return map;
    }

    @Override
    public String getFindAllStatement() {
        return "SELECT * FROM :@ ";
    }
}

So, when I say I wanted to switch between databases without changing code, I mean switching without changing my DAO classes.

For SQLite, I used the xerial-jdbc-sqlite driver. I talk about drivers because some things might be driver-specific when solving these problems; so when I say ‘SQLite does it this way’, I generally mean ‘the xerial-jdbc-sqlite driver does it this way’.

Now, let’s start.

WARNING: Some of the solutions I give here fit into my framework, but might not directly fit into your code. It’s up to you to imagine how to adapt what I provide here.

DATA TYPES

Since there are some differences between PostgreSQL and SQLite regarding data types, and I wanted to keep accessing database values through the regular ResultSet interface, I had to have some mechanism to intercept calls like resultset.getDate("date_field"). So I created a ResultSetWrapper class that redefines the methods I am interested in, like this:

public class ResultSetWrapper implements ResultSet {

    // The wrapped ResultSet
    ResultSet wrapped;

    /* The DateFormat used to parse dates stored as text. I'm assuming an
       SQLite-style pattern here; ideally this would be configurable. */
    SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd");

    public ResultSetWrapper(ResultSet wrapped) {
        this.wrapped = wrapped;
    }

    /* Lots of ResultSet method implementations go here,
       but this is an example of redefining a method
       whose behavior I'm interested in changing: */

    public Date getDate(String columnLabel) throws SQLException {
        Object value = this.wrapped.getObject(columnLabel);
        return (Date)TypesInferreer.inferDate(value);
    }
}

The getDate() method in ResultSetWrapper relies on TypesInferreer to convert the value retrieved to a Date value.

All data type conversions would be encapsulated inside TypesInferreer, which would have methods to convert between different data types as needed. For instance, it would have a method like this one:

public static Object inferDate(Object value) {
    java.util.Date date;

    // Do conversions here (convert value and assign it to date)

    return date;
}

This method tries to convert any value to a Date (I’ll show the actual implementation further on).

Now, instead of using the original resultset obtained from preparedStatement.executeQuery(), you use new ResultSetWrapper(preparedStatement.executeQuery()). That’s what my framework does: it passes this wrapped resultset to the DAO objects.

Now let’s see some type conversions.

Mixing PostgreSQL Date and SQLite Long/String

You could store Date values as text in a SQLite database (e.g. '2013-10-09'); you can do this manually when creating the database, but when SQLite stores a Date object, by default it converts it to a Long value. There is no problem with this when saving the value to the SQLite database, but if you try to retrieve it using resultset.getDate("date_field"), things get messy; it simply won’t work (you get a cast exception).

How do you access Date values, then? You create this method in TypesInferreer, which covers both the String and Long variations:

public static Object inferDate(Object value) {
    java.util.Date date = null;
    if(value == null) return null;
    // The pattern used when dates are stored as text (ideally this would be configurable)
    SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd");
    if(value instanceof String) {
        try {
            date = df.parse((String)value);
        } catch (ParseException ex) {
            // Deal with ex
        }
    } else if(value instanceof Long) {
        date = new java.util.Date((Long)value);
    } else {
        date = (Date)value;
    }
    return new Date(date.getTime());
}

And as you saw, the getDate() function in ResultSetWrapper is redefined like this:

@Override
public Date getDate(String columnLabel) throws SQLException {
    Object value = this.wrapped.getObject(columnLabel);
    return (Date)TypesInferreer.inferDate(value);
}

Now all DAOs can retrieve Date values from both databases indistinctly, using resultset.getDate(“date_field”).

Mixing PostgreSQL Numeric and SQLite Integer/Double/…

My SQLite driver didn’t implement the getBigDecimal() function. It complained like this when I called it: java.sql.SQLException: not implemented by SQLite JDBC driver.

So I had to come up with a solution that was valid for both PostgreSQL and SQLite. This is what I did in ResultSetWrapper:

@Override
public BigDecimal getBigDecimal(String columnLabel) throws SQLException {
    Object value = this.wrapped.getObject(columnLabel);
    return (BigDecimal)TypesInferreer.inferBigDecimal(value);
}

But value could have a different type depending on the actual value stored in the database; it could be an Integer, or a Double, or perhaps something else. I solved all the cases by doing this in TypesInferreer:

public static Object inferBigDecimal(Object value) {
    if(value == null) return null;
    if(!(value instanceof BigDecimal)) {
        return new BigDecimal(String.valueOf(value));
    }
    return value;
}

Anyway, the String constructor of BigDecimal is the recommended one, so everything’s fine with this. Now you can retrieve BigDecimal values using resultset.getBigDecimal(“bigdecimal_field”) from both databases.

Mixing PostgreSQL Boolean and SQLite Integer

SQLite doesn’t have boolean values. Instead, it interprets other values as booleans by following some rules. When SQLite saves a Boolean value to the database, it saves it as 0 or 1 for false or true, respectively. Also, because drivers can interpret any value as a boolean, you can use resultset.getBoolean("boolean_field") and it will work as expected according to those rules.

But the problem I faced appeared when creating filters. If true is stored as 1 in the SQLite database, you can’t expect the clause WHERE boolean_field = true to work. You will never find a match. Instead, you should say WHERE boolean_field = 1.

In my app, I created filters like this:

dao.addFilter(new FilterSimple("boolean_field", true));

Now I needed FilterSimple to infer that, for SQLite, I meant 1 instead of true. So I created what I called a DataSourceVariation. These are objects that are specific to each type of database and are used across all data accesses, by DAOs, Filters, and other objects. They take care of managing all my cross-database incompatibilities, including (a minimal sketch of the interface follows the list):

  • The way to reference a database object: in PostgreSQL you must prepend the schema name to every database object you refer to in your queries. In SQLite you don’t.
  • The way to manage exceptions: explained further on in this post.
  • The way to back up and restore data: explained further on in this post.
  • Expressing BETWEEN clauses: explained further on in this post.
  • And also, inferring boolean values.
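
The full contract of DataSourceVariation isn’t shown in this post, but based on the responsibilities above, a minimal sketch of the interface could look like the following (getReplaceValue, manageException and getBetweenExpression appear later in the post; qualifyObjectName is an assumed name, and the backup/restore methods are left out):

import java.sql.SQLException;

public interface DataSourceVariation {

    // How to reference a database object (PostgreSQL prepends the schema name, SQLite doesn't)
    String qualifyObjectName(String objectName);

    // Adapt a filter value before it goes into a clause (e.g. true -> 1 for SQLite)
    Object getReplaceValue(Object value);

    // Translate driver-specific SQLExceptions into the framework's own exceptions
    void manageException(SQLException ex);

    // Build a BETWEEN-equivalent expression for the given field and bounds
    String getBetweenExpression(String fieldName, Object d1, Object d2);
}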

For VariationSQLite, I did this:

@Override
public Object getReplaceValue(Object value) {
    if(value instanceof Boolean) {
        if((Boolean)value == true) return new Integer(1);
        else return new Integer(0);
    }
    return value;
}

Now we can say dao.addFilter(new FilterSimple("boolean_field", true)) for both databases, assuming that FilterSimple uses the variation to adapt the value before constructing the clause.
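
As an illustration only (the real FilterSimple isn’t shown in this post), the adaptation could look something like this, using the DataSourceVariation interface sketched above:

public class FilterSimple {
    private final String fieldName;
    private final Object value;

    public FilterSimple(String fieldName, Object value) {
        this.fieldName = fieldName;
        this.value = value;
    }

    // Builds the SQL fragment, letting the active variation adapt the value first
    public String toClause(DataSourceVariation variation) {
        Object adapted = variation.getReplaceValue(value);  // true -> 1 for SQLite, unchanged for PostgreSQL
        return fieldName + " = " + adapted;
    }
}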

RETRIEVING AUTOGENERATED KEYS

When you have auto-incremented fields (e.g. serial), in PostgreSQL you can specify a RETURNING clause at the end of an INSERT statement to automatically retrieve the values of the autogenerated fields, like this:

PreparedStatement pstm = conn.prepareStatement(queryWithReturningClause); // e.g. INSERT INTO table_x (...) VALUES (...) RETURNING field_x
ResultSet rs = pstm.executeQuery();
if(rs.next()) {
    // Get autogenerated fields from rs
}

But that won’t work with SQLite. In SQLite, retrieving autogenerated fields involves creating the statement with a flag, executing the update and then explicitly asking for the generated values. Like this:

PreparedStatement pstm = conn.prepareStatement(queryWITHOUTreturningClause, Statement.RETURN_GENERATED_KEYS);
pstm.executeUpdate();
ResultSet rs = pstm.getGeneratedKeys();
if (rs != null && rs.next()) {
    // Get autogenerated fields from rs
}

The good news is that this code works for both PostgreSQL and SQLite, so I replaced my previous code with it and didn’t have to make any distinction between databases.


ENFORCING FOREIGN KEYS

You’d think that using a REFERENCES table_name(field_name) clause when creating a SQLite table causes foreign keys to be checked when deleting, updating, etc. You’re wrong!

Foreign keys are not enforced in SQLite by default. You have to ask for it explicitly, and it’s done when creating the connection (WARNING: this is very driver-specific):

SQLiteConfig config = new SQLiteConfig();
config.enforceForeignKeys(true);
Connection conn = DriverManager.getConnection("jdbc:sqlite:" + dataSourcePath, config.toProperties());

For PostgreSQL it’s different, so you’d better have a connection pool for each type of database and decide which one to use at runtime. My framework does exactly that.

NOTE: If you can obtain the connection depending on the database type, then you can enforce foreign keys transparently for both databases (for PostgreSQL it happens naturally, without extra code). For instance, you could have an abstract getConnection() method, and each database’s connection provider would return the connection in its own way.
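
A hedged sketch of that idea, with hypothetical names and connection strings (not my framework’s actual API), could be:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import org.sqlite.SQLiteConfig;

abstract class ConnectionProvider {
    abstract Connection getConnection() throws SQLException;
}

class PostgresConnectionProvider extends ConnectionProvider {
    @Override
    Connection getConnection() throws SQLException {
        // Foreign keys are enforced by default; no extra configuration needed
        return DriverManager.getConnection("jdbc:postgresql://localhost/mydb", "user", "password");
    }
}

class SQLiteConnectionProvider extends ConnectionProvider {
    @Override
    Connection getConnection() throws SQLException {
        SQLiteConfig config = new SQLiteConfig();
        config.enforceForeignKeys(true);  // must be requested explicitly (driver-specific)
        return DriverManager.getConnection("jdbc:sqlite:/path/to/data.db", config.toProperties());
    }
}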

MANAGING EXCEPTIONS

I had defined several types of database exceptions in my framework: ExceptionDBDuplicateEntry, ExceptionDBEntryReferencedElsewhere, etc., which would be thrown and raised to the upper layers of my architecture. For PostgreSQL, these exceptions mapped directly to some constant codes (which normally are vendor/driver specific): UNIQUE_VIOLATION = "23505", FOREIGN_KEY_VIOLATION = "23503", etc. So, for PostgreSQL, I managed database exceptions somewhat like this:

private void manageException(SQLException ex) {
    if (ex.getSQLState() == null) {
        ex = (SQLException) ex.getCause();
    }
    if (ex.getSQLState().equals(UNIQUE_VIOLATION)) {
        throw new ExceptionDBDuplicateEntry();
    } else if(ex.getSQLState().equals(FOREIGN_KEY_VIOLATION)) {
        throw new ExceptionDBEntryReferencedElsewhere();
    } else {
        DAOPackage.log(ex);
        throw new ExceptionDBUnknownError(ex);
    }
}

That won’t work for SQLite, obviously! So, what I did was move the database exception management to the DataSourceVariation. The VariationPostgresql class has a method similar to the one above. For VariationSQLite, I did sort of a hack, but it’s something that has worked until now (maybe until I change my driver):

@Override
public void manageException(SQLException ex) {
    // This is a hack (is it???)
    String message = ex.getMessage().toLowerCase();
    if(message.contains("sqlite_constraint")) {
        if(message.contains("is not unique")) throw new ExceptionDBDuplicateEntry();
        else if(message.contains("foreign key constraint failed")) throw new ExceptionDBEntryReferencedElsewhere();
        else {
            DAOPackage.log(ex);
            throw new ExceptionDBUnknownError(ex);
        }
    } else {
        DAOPackage.log(ex);
        throw new ExceptionDBUnknownError(ex);
    }
}

FIXING THE BETWEEN CLAUSE

The problem with the BETWEEN clause appeared while using a filter like this: 

dao.addFilter(new FilterBetween("date_field", date1, date2)); // date1 and date2 are java.util.Date objects

FilterBetween would create a BETWEEN clause by formatting Dates as Strings, normally with the format 'yyyy-MM-dd' (although this should be configurable). Since dates in SQLite are stored as long values, we can’t create a clause like date_field BETWEEN '2013-01-01' AND '2013-02-01'. It has to be something like date_field >= 1357016400000 AND date_field <= 1359694800000.

So, I moved the creation of BETWEEN clauses to… that’s right, to the DataSourceVariation. VariationSQLite does it like this:

@Override
public String getBetweenExpression(String fieldName, Object d1, Object d2) {
    String filter = "";
    try {
        Date dd1 = null;
        Date dd2 = null;

        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd"); // Remember, this should be configurable
        if(d1 instanceof String) dd1 = df.parse((String)d1); else dd1 = (Date)d1;
        if(d2 instanceof String) dd2 = df.parse((String)d2); else dd2 = (Date)d2;

        filter = fieldName + " >= " + dd1.getTime() + " AND " + fieldName + " <= " + dd2.getTime();
    } catch (ParseException ex) {
        DAOPackage.log(ex);
        throw new ExceptionDBUnknownError(ex);
    }
    return filter;
}

CONCLUSIONS

As you can see, there are many intricacies in making an app support multiple database types. All I did here was support PostgreSQL and SQLite, and who knows what would be needed to support other databases at the same time. You can’t expect JDBC alone to do all the work, so be prepared to solve some problems (and another problem, and another, …) to make a database migration like this. And please, share your journey.

Customer vs. Programmer centric configuration

Some programmers think of software as all-configurable systems that can be adapted to (almost) every variation of the domain they cover, by modifying some configuration resources through a user interface and without touching the code. This is a customer-centric configuration approach, because it focuses mainly on giving the customer the capability to modify the system.
This approach pushes programmers to think up front about a design that can cope with this requirement. They try to come up with a database schema so malleable that tables and columns are generated automatically. They spend precious time solving the hard task of making everything generic so it can be applied to every case imaginable, sometimes giving things odd names.
I prefer to see a software configuration as a whole new subsystem that, attached to the current system, is able to change the software’s behavior. This new subsystem may include new database tables, new configuration files, and more importantly, new code. And new code is the main thing the customer-centric approach isn’t able to see as part of the configuration process, because it is mainly what that approach tries to avoid.

CCC vs. PCC

A programmer-centric configuration approach focuses on programmers as the ones in charge of configuring an app, using any means they have at their disposal, including code. So a programmer’s task becomes not thinking up a generic system, but building something that works for some cases, while identifying possible points of change for others and being ready to make adjustments right away.
So, you start to think of an app’s design not only as a parameterizable system, but as a pluggable system too. I consider plugins to be configuration resources. And plugins can have their own data, resources, code, user interfaces, etc.
To achieve this, you don’t have to be a mastermind; you only have to stick to some principles (Open-Closed, mainly) and use all the tools and knowledge at hand.
An extra bonus of this approach is that responsibility now moves to the programmer’s shoulders, which leads to users with fewer options and fewer decisions to make. Anyway, with heavily configurable systems you never detach from customers, because they never learn how to use your software properly.
I presume that customer-centric configuration dates back to the times when there was no continuous deployment, and changes to the code could not reach customers instantly. When you’re configuring an app through a user interface, you are deploying. So are you when you add code to a codebase and tables to a database to cope with a customer requirement, with the intention of deploying it immediately. The former approach used to be faster, but not anymore, now that continuous deployment has appeared.
Other advantages are:

  • You can still apply agile techniques, without having to design up front.
  • You can use names that better resemble the domain you are dealing with.
  • You keep yourself sane for a longer time.

Manager Classes

When I introduced the Easy-Bind mechanism, our team started to (ab)use it for almost every user interaction dynamic we wanted to describe. As some interactions started to get more complicated, the abstraction started to leak; we had to resort to some tricks in order to describe complex things like setting/wiring up a command to an event dynamically (depending on some previous user choice, for example), and later deciding to replace it with another (put an example of when this may happen).
So I decided I had to take the next evolutionary step and improve the mechanism. The solution: a manager class.
This manager class would be in charge of providing a smooth interface for working with event listeners, from adding listeners of any supported type to visual components, to enabling/disabling them, to switching between listeners.
Now let me demonstrate how things changed with this new manager class through some examples:
1- Before, we attached a listener to a visual component by directly typing:

component.add[Some-Type]Listener(listener); // Some-Type = {Action, Key, Mouse, Focus, ...}

Now we do:

EventsListenerManager.registerListener("some unique name", component, listener);

There are two things to highlight here: we remove the explicit specification of [Some-Type] and have a single function for every type of listener; and we define a unique name for the binding, which will be used as an identifier to remove, enable/disable, switch it, etc.
2- Before, to remove a listener we had to hold a reference to the listener (which adds complexity only to accomplish a task that should be simple… thanks Java) and type:

component.remove[Some-Type]Listener(listener);

Now we do:

EventsListenerManager.unregisterListener("some unique name");

No need to hold a reference.
3- Before, to switch a listener we would first remove it (remember to hold a reference) and then add the new one.
Now we do:

EventsListenerManager.switchListener("some unique name", newlistener);

4- Before, enabling and disabling a listener was faked by adding and removing it, possibly incurring unneeded listener constructions.
Now we do:

EventsListenerManager.enableListener("some unique name");
EventsListenerManager.disableListener("some unique name");
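
The examples above show the manager’s interface but not the class itself. A minimal sketch of one possible implementation, assuming a static registry keyed by the binding name (only two listener types handled here, and enable/disable implemented by detaching and re-attaching), could be:

import java.awt.Component;
import java.awt.event.ActionListener;
import java.awt.event.KeyListener;
import java.util.EventListener;
import java.util.HashMap;
import java.util.Map;
import javax.swing.AbstractButton;

public class EventsListenerManager {

    // One entry per registered binding
    private static class Binding {
        Component component;
        EventListener listener;
        boolean enabled = true;
        Binding(Component component, EventListener listener) {
            this.component = component;
            this.listener = listener;
        }
    }

    private static final Map<String, Binding> bindings = new HashMap<String, Binding>();

    public static void registerListener(String name, Component component, EventListener listener) {
        attach(component, listener);
        bindings.put(name, new Binding(component, listener));
    }

    public static void unregisterListener(String name) {
        Binding b = bindings.remove(name);
        if (b != null) detach(b.component, b.listener);
    }

    public static void switchListener(String name, EventListener newListener) {
        Binding b = bindings.get(name);
        if (b == null) return;
        if (b.enabled) detach(b.component, b.listener);
        b.listener = newListener;
        if (b.enabled) attach(b.component, newListener);
    }

    public static void disableListener(String name) {
        Binding b = bindings.get(name);
        if (b != null && b.enabled) {
            detach(b.component, b.listener); // keep the reference, just detach it
            b.enabled = false;
        }
    }

    public static void enableListener(String name) {
        Binding b = bindings.get(name);
        if (b != null && !b.enabled) {
            attach(b.component, b.listener);
            b.enabled = true;
        }
    }

    // Only a couple of listener types shown; a full version would cover the whole family
    private static void attach(Component c, EventListener l) {
        if (l instanceof KeyListener) c.addKeyListener((KeyListener) l);
        else if (l instanceof ActionListener && c instanceof AbstractButton)
            ((AbstractButton) c).addActionListener((ActionListener) l);
    }

    private static void detach(Component c, EventListener l) {
        if (l instanceof KeyListener) c.removeKeyListener((KeyListener) l);
        else if (l instanceof ActionListener && c instanceof AbstractButton)
            ((AbstractButton) c).removeActionListener((ActionListener) l);
    }
}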

Obviously, it is now convenient to always use the manager’s interface, since it reduces the need to trick our code and gives us a simple and more readable interface to achieve many things with listeners. But that’s not the only advantage of this manager class.
You see, manager classes in general have one big advantage: they create an abstraction of a service and work as a centralized point to add logic of any nature, like security logic, performance logic, etc.
In the case of our event listener manager, we put in some extra logic to enhance the way we can use listeners, by providing an interface that simplifies their use: it is easier to switch between two listeners now than it was before.
Another thing we could have done was restrict the number of listeners registered on one component for performance’s sake (go study the listener execution model in Java; hint: they don’t run in parallel).
So manager classes seem to be a good thing, but we can’t make a manager for every object we want to control in an application. This would consume time and would make us start thinking of a manager even when we don’t need one. But we can discover a pattern for when a manager is needed; I’ve seen them used when the resources they control are expensive, like database connections, media resources, hardware resources, etc. Expensiveness can come in many flavors:

  • Listeners are expensive because they can be a bottleneck in performance (we must keep them short and fast) and may be difficult to play with.
  • Connections are expensive to construct.
  • Media resources are expensive to load.
  • Hardware resources are expensive because they are scarce.

So we create different managers for them:

  • We create a manager for listeners to ease and control their use.
  • We create a connection pool in order to limit and control the number of connections to a database.
  • Game developers create a resource manager to have a single point of access to resources and reduce the possibility of loading them multiple times.
  • We all know there is a driver for every piece of hardware. That’s their manager.

Here we can see multiple variations of manager class implementations, but they all attempt to solve a single thing: reducing the cost we may incur when using expensive resources directly.
I think people should adopt this line of thought: make a manager class for every expensive resource you detect in your application. You will be more than grateful when you want to add extra logic and you only have to go to one place. Also, enforce the use of manager objects instead of accessing the resources directly.

Avoiding Java’s (Leaky) Checked Exceptions

I remember that when we started to deal with Java’s exception model we had a lot of trouble, which I want to share with you…
It all started when I introduced the easy-bind mechanism I wrote about before, specifically with the ICommand interface. As a reminder, here is the interface:

public interface ICommand {
   public void execute();
}

The simplicity of this interface brings the advantage that it may be a universal interface. I call an interface universal when it can be copied and pasted into every project without modification. Universal interfaces have the particularity of expressing a concept that cannot be changed or reduced, at least without affecting the concept itself. In the case of the ICommand interface, it models an action that can be executed.
But this universality status of the ICommand interface was put into question when we started to implement it, due to Java’s exception model.
It turns out we had intentionally made the database-related exceptions checked (since no one wants these exceptions to flow out of the application without being noticed), and our database access API would explicitly throw these exceptions when we invoked some of its methods. (Put examples).
Commands in the easy-bind mechanism were meant to directly bind user interface events/actions to some processing, generally database-related processing. Naturally, some commands were very likely to use the database API and invoke methods that threw checked exceptions. Right away, the ‘magic’ of Java’s exception handling model started to emerge.
As we coded some commands, our IDE (NetBeans), not to say the compiler, complained about the necessity to catch those checked exceptions or rethrow them. What’s worse, this always happened inside the implementation of the execute() method.
But look at the ICommand execute() method’s signature again. It doesn’t declare any thrown exception. So we found ourselves forced to manage those exceptions inside the execute() method, just to avoid breaking the interface’s signature.
Of course, we immediately realized that commands are not meant to manage database exceptions (which involves rolling back all the operations), and this is reinforced by the fact that these specific commands were meant to be bound to a user interface event, something that normally happens asynchronously, and the code in the commands is unreachable by the client code using them. So something bad was happening here.
One solution was to manage the database exception inside the execute() method and rethrow a runtime exception wrapping the original (which saves us from having to modify the execute() method’s signature), but with this approach we would lose our intent of enforcing proper handling of database-related exceptions outside commands.
The other solution didn’t seem to constitute a major problem for many people on the team, since we could always adjust the ICommand interface to rethrow database-related exceptions, and commit ourselves to adjusting that interface every time an implementation threw a new type of checked exception.
But this solution instantly popped an alert in my head, since we would be violating the universal nature of the command interface. We would be tightly coupling ourselves to the implementations of the ICommand interface. We would never be able to trust the ‘program to interfaces, not to implementations’ principle. It even broke all my understanding of designing interfaces in the first place; why should I do it thinking of their possible implementations? Besides, it created a lot more trouble, like the leakage of unhandled exceptions, or the need to update all the client code that used interfaces that were forced to change their method signatures; this problem rapidly scales until it becomes unmanageable.
I couldn’t live with that in my mind and started to do some research on checked versus unchecked exceptions in Java. What I found out is that these problems had already been described and discussed (put some links and examples of problems here), but I couldn’t find concrete solutions, since this ultimately depends on the concrete application. Furthermore, a lot more problems were also described, which led me to wonder why checked exceptions were invented in the first place (of course they have a reason, but I’ll leave that for later in this post).
So, I prepared a report explaining why I thought rethrowing checked exceptions and updating method signatures in public interfaces couldn’t in any way be a good approach. My arguments at the time were the following:

  • Interfaces MUST be designed without knowing anything about their possible implementations.
  • Interfaces should put restrictions on their implementations, not the other way around: an implementation cannot push a change into an interface.
  • Once interfaces are made public, you must commit yourself to not changing them; otherwise, many bad things might happen in client code.
  • In a multi-layered application (our case), we need to let the top layers be notified about errors in the bottom layers, but we should not force the layers in between to handle those errors, or even know about them.
  • Letting all exceptions be seen as they travel their way up to the top layers is a leak in the abstraction that layers are supposed to provide.
  • Programmers were starting to just rethrow all exceptions that got in their way to implementing some functionality; why should they bother about exceptions that…
  • There are some exceptions that need to be handled in lower layers, since those layers are the only ones with the knowledge to do that. For instance, database-related exceptions need to be handled in a class that is able to roll back a transaction, which should normally be the one that starts the transaction. Still, a notification of the error should be sent to the top layers, for example, to show an alert.

I then proposed some solutions based on some principles/guidelines. These principles are basically conventions on the way exceptions should be handled. These conventions have to do with putting a try-catch statement in certain places in the application where certain services are invoked, or making certain kinds of exceptions checked or unchecked, etc. Here they are:

  • Database exceptions would be checked (yeah, checked), but they should be handled/caught in some intermediary layer (the Service layer in our case), which would wrap them inside an unchecked exception that would be (invisibly) rethrown up to the higher layers (see the sketch after this list).
  • Write an explicit try-catch statement in any place in the top layers where a service in the intermediary layers is invoked and something can be done with that exception (e.g. notify the user).
  • Take advantage of checked exceptions where we can. For instance, they are good when used in self-contained modules/subsystems where all checked exceptions are used internally and none (or perhaps a few) escape.
  • Make domain exceptions checked to force their handling. In other words, we turn every domain rule violation into a checked exception. This is because domain exceptions/rules can appear anywhere without anticipation, so no convention can ensure they are caught somewhere, hence the enforcement.
  • Make exceptions that cannot be handled or recovered from (app-crashers, e.g. programming mistakes or mandatory files corrupted) unchecked, catch them only at certain well-defined places in the application, and treat them there. Close the application in these cases.
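
A minimal sketch of the first two conventions, with hypothetical class names (not our framework’s actual ones), could look like this:

// Hypothetical names throughout; a sketch of the convention, not the project's actual classes.
class Person { String name; }

// Checked: database errors must be dealt with by the layer that owns the transaction.
class DatabaseException extends Exception {
    DatabaseException(String message, Throwable cause) { super(message, cause); }
}

// Unchecked wrapper that travels invisibly up to the top layers.
class ServiceUnavailableException extends RuntimeException {
    ServiceUnavailableException(Throwable cause) { super(cause); }
}

// Intermediary (Service) layer: catches the checked exception, rolls back, rethrows unchecked.
class PersonService {
    Person findPerson(String id) {
        try {
            return loadFromDatabase(id);              // may throw the checked DatabaseException
        } catch (DatabaseException ex) {
            rollbackTransaction();                    // this layer knows how to roll back
            throw new ServiceUnavailableException(ex);
        }
    }
    private Person loadFromDatabase(String id) throws DatabaseException { return new Person(); }
    private void rollbackTransaction() { /* ... */ }
}

// Top layer: one explicit try-catch wherever a service is invoked and the error can be acted upon.
class PersonWindow {
    private final PersonService service = new PersonService();
    void onSearch(String id) {
        try {
            Person p = service.findPerson(id);
            // show p's data on the window
        } catch (ServiceUnavailableException ex) {
            // notify the user (e.g. show an alert); the layers in between never saw the checked exception
        }
    }
}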

So, the final solution was a mix of checked and unchecked exceptions, governed by some conventions. This solution proved to be a good one, since our work became smoother just by sticking to some SIMPLE conventions; I think simplicity was key here.
Lessons:

  • Handling exceptions is a matter of conventions. There is no definitive solution yet.
  • Conventions can be made only when we can anticipate what will happen. For instance, with database exceptions we can define conventions, but not with domain exceptions, since those can be of multiple natures and can appear anywhere; with those exceptions, we must force their handling by making them checked.
  • We should try to minimize the use of checked exceptions, but they are advantageous in some cases.

Easy Bind

In this post I’m going to explain a mechanism we introduced in a Java framework we’ve been harvesting. I’m going to go through it using thought experiments. That is, my starting point for each step is going to be a hypothetical situation (or was it real?), whose solution will lead to the emergence of the mechanism.
Let’s consider the following situation: we have a window in which a user is going to enter some data. The user is supposed to enter, for instance, a person’s id number, then press ‘Enter’, and the name of the person is displayed on a label next to the field where he entered the id.
Now let’s analyze the entire computation flow of this simple interaction. First, the action is triggered with a key press (‘Enter’). Then some computation occurs to retrieve a Person object from a database, whose name is ultimately shown on a label on the screen.
We could model that flow in the following way (I’ll go directly to what matters):
Let’s have a command object encapsulate the computation needed to retrieve the person from the database. A command object only has one execute() method that doesn’t receive any parameters; an assumption is that the command object is constructed with all the data needed to carry out its job, in this case, retrieve a person given its id number. The command would implement the following interface:

public interface ICommand {
   public void execute();
}

And this is an excerpt of the command to retrieve an object from a database:

public class CommandFindObjectByCodeField implements ICommand {
   // The field to extract the code from
   JTextComponent field;
   //The 'alias' of the entity to be retrieved
   String entityAlias;

   @Override
   public void execute() {
      // Get data from text field
      // Get DAO for entity using its alias (through a DAO Factory)
      // Find object using the DAO
   }
}

In fact, this command can retrieve from the database any type of object because it is parameterized with the alias of the object we want to retrieve (our framework is configured to work that way).
That’s one part. The next is to detect the key press with a listener registered on the field where the user enters the id number (using Java’s AWT libraries); let’s also make the listener hold a reference to a command object in order to execute it when the key is pressed; and we are also going to parameterize the listener with the code of the key we want it to accept as valid (in our case the valid key is ‘Enter’). The listener is something like this:

public class ExecuteCommandOnCustomKeyPress implements KeyListener {
   // The key to be checked
   int keyCode;
   // The command to execute
   ICommand command;

   @Override
   public void keyPressed(KeyEvent e) {
      if(keyCode == e.getKeyCode()) {
         command.execute();
         e.consume();
      }
   }

   // Constructor plus empty keyTyped() and keyReleased() omitted for brevity
}

Up to here we can register a listener object in the field (named txtPersonId) like this:

txtPersonId.addKeyListener(new ExecuteCommandOnCustomKeyPress(KeyEvent.VK_ENTER, new CommandFindObjectByCodeField(txtPersonId, "person")));

That way the command will execute after every ‘Enter’ key press.
But how do we know the person that was retrieved in order to show its name? Let’s have this interface, which receives an object to do something with it (like showing a description on a label):

public interface IObjectReceiver {
   public void receiveObject(Object o);
}

And put a reference to an object that implements this interface in the command object. For that, we will modify the CommandFindObjectByCodeField like this:

public class CommandFindObjectByCodeField implements ICommand {
   // The field to extract the code from
   JTextComponent field;
   // The 'alias' of the entity to be retrieved
   String entityAlias;
   // The receiver
   IObjectReceiver receiver;

   @Override
   public void execute() {
      // Get data from text field
      // Get DAO for entity using its alias (through a DAO Factory)
      // Find object using the DAO

      receiver.receiveObject(o);
   }
}

Our window could implement the IObjectReceiver interface to make itself a receiver of the object retrieved by the command:

public class OurWindow implements IObjectReceiver {
   @Override
   public void receiveObject(Object o){
      // Do something with the object (which will be a Person object): show its name.
   }
}

Now we must create the command with a reference to the window object. Assuming that the window has the code to create the listener registration, we would end up with:

txtPersonId.addKeyListener(new ExecuteCommandOnCustomKeyPress(KeyEvent.VK_ENTER, new CommandFindObjectByCodeField(txtPersonId, "person", this)));

This way, every time the command is executed after pressing ‘Enter’, the window object receives a notification with the person retrieved (or null if it’s not found). At this moment, the name of the person can be displayed.
Now let’s continue adding functionality to our window. We want that, if the user doesn’t know the person’s id number, he can press ‘F1’ to show a popup window with a list of all the persons available. The user can then select one person from the list, press ‘Enter’, and again the main window will display the name of the person on the label.
This interaction is similar to the case where the user enters the id number directly, except that the person is retrieved differently (it takes more steps). With this in mind, let’s encapsulate the new way of retrieving a person inside another command object; an object of the following class:

public class CommandShowWindowWithObjectList implements ICommand {
   // A type of window that can hold a list of objects and display a table with them
   IObjectListDisplay window;
   // The 'alias' of the entities to be retrieved
   String entityAlias;

   public void execute() {
      // Get DAO for entity using its alias (through a DAO Factory)
      // Find list of objects using the DAO
      window.setListOfObjects(list);
      window.show();
   }
}

We are going to create a second window that implements the IObjectListDisplay interface: that’s our popup window with the list of objects, which holds a reference to an IObjectReceiver object to notify it once the user selects an object from the list:

public class OurPopupWindow implements IObjectListDisplay {
   // The receiver
   IObjectReceiver receiver;
   // The table to show the list
   JTable table;

   public void selectObject(int index) {
      // Obtain object with index from the table
      receiver.receiveObject(o);
      // Close (optional)
   }
}

Now the way to register a listener for this case is like this:

txtPersonId.addKeyListener(new ExecuteCommandOnCustomKeyPress(KeyEvent.VK_F1, new CommandShowWindowWithObjectList("person", new OurPopupWindow(this))));

The only difference here is that we parameterized the listener with the ‘F1′ key and passed a new command.
Notice that what we have created here is a mechanism for encapsulating some computation (normally related to a database access) inside command objects, and we have a key listener for executing these commands. We have also managed to notify the result of the computation to another object (the main window in this case) by using a very simple interface that does exactly that: receive notifications. Notifications are performed by the objects capable of doing it: in the first case it was the command object itself, but in the second case we had to delegate that responsibility to the popup window.
But nothing stops us from borrowing the idea of the ExecuteCommandOnCustomKeyPress listener and creating other ‘species’ of listeners that execute commands under different circumstances as well; we could implement listeners that execute commands when a button is pressed, or when a component gains focus or is double clicked, and many others. Since the set of available listeners is finite and quite narrow, we could have a whole family of listeners that covers all possible events!!!
For instance, if we wanted to show the popup window also when a button is pressed, we would do the following:

btnShowPersonsList.addActionListener(new ExecuteCommandOnAction(new CommandShowWindowWithObjectList("person", new OurPopupWindow(this))));

What changes here is the type of listener we are using.
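The ExecuteCommandOnAction listener isn’t shown in this post, but a minimal sketch of it, following the same pattern as ExecuteCommandOnCustomKeyPress, could be:

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public class ExecuteCommandOnAction implements ActionListener {
   // The command to execute when the action event fires
   ICommand command;

   public ExecuteCommandOnAction(ICommand command) {
      this.command = command;
   }

   @Override
   public void actionPerformed(ActionEvent e) {
      command.execute();
   }
}
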
Now suppose that in the same window we must add car orders, which consist of specifying a Car and an amount to be bought. We have three buttons to ‘Add’, ‘Modify’ or ‘Delete’ an order, or we can select one order and press ‘Enter’ to modify it, or ‘Del’ to delete it.
For the computation of the actions we are going to keep encapsulating it in command objects. But in this case, notifications are different because we must indicate the type of action being executed: addition, modification or deletion. For this, we are going to use a different interface that receives all three types of notifications:

public interface ITableObjectChangeReceiver {
   public void addObject(Object o);
   public void updateObject(int index, Object newo);
   public void removeObject(int index);
}

The main window would also implement this new interface, like this:

public class OurWindow implements IObjectReceiver, ITableObjectChangeReceiver {...}

And we are going to create different commands, one for each type of action. For example, the following is an excerpt of a command that deletes an object from a table:

public class CommandDeleteTableRow implements ICommand {
   // The table to detect the selected row
   JTable table;
   // The receiver
   ITableObjectChangeReceiver receiver;

   @Override
   public void execute() {
      // Find row object selected
      receiver.removeObject(rowSel);
   }
}

Then we can solve the deletion of car orders like this:

btnDeleteCarOrder.addActionListener(new ExecuteCommandOnAction(new CommandDeleteTableRow(tblCarsOrders, this)));

tblCarsOrders.addKeyListener(new ExecuteCommandOnCustomKeyPress(KeyEvent.VK_DELETE, new CommandDeleteTableRow(tblCarsOrders, this)));

So far we haven’t talked about possible implementations of the interfaces IObjectReceiver and ITableObjectChangeReceiver. Of course, these implementations are pretty straightforward, but there are cases where it’s better to enhance our model in order to cover situations that could make it difficult (if not impossible) to write a proper implementation.
Consider the case where, besides a person, we want to enter the code of a destination (a City object) and retrieve it from the database to show its description on a label when we press ‘Enter’.
We could achieve this the same way we have done so far, only changing tiny pieces:

txtDestCode.addKeyListener(new ExecuteCommandOnCustomKeyPress(KeyEvent.VK_ENTER, new CommandFindObjectByCodeField(txtDestCode, "city", this)));

You could implement the IObjectReceiver interface like this:

public void receiveObject(Object o) {
   if(o instanceof Person) // Treat o as a person
   else if(o instanceof City) // Treat o as a destination
}

But this implementation has flaws. In a situation where you can receive objects of the same type but with different semantics (e.g. Person objects representing a client or a provider), you can’t just ask for their type; you must also know the semantics of the object being received: it’s a Person object, but is it a client or a provider?
To solve this we will introduce another interface:

public interface ISimpleStateMachine {
   public void setState(int state);
}

OK, I agree. This interface seems very simple, but its potential is in the way it is used. What if we could ensure that, every time the user gets ready to fill in some data, we know in advance the ‘type’ of the data he will be filling in (no way, a magic ball? you’d say).
Let’s suppose the user will be entering codes for a client, a buyer and a deliverer in three different fields. We create three states in our main window, one for each type of data:

// States with different values (not necessarily adjacent)
final static int FILLING_CLIENT = 2;
final static int FILLING_BUYER = 5;
final static int FILLING_DELIVERER = 11;

The main window implements the interface ISimpleStateMachine:

public class OurWindow implements IObjectReceiver, ITableObjectChangeReceiver, ISimpleStateMachine {
   // The current state
   int state;

   @Override
   public void setState(int state) {
      this.state = state;
   }
}

And we create the binds like this:

txtClientCode.addFocusListener(new ExecuteCommandOnFocus(new CommandSetState(this, FILLING_CLIENT)));

txtBuyerCode.addFocusListener(new ExecuteCommandOnFocus(new CommandSetState(this, FILLING_BUYER)));

txtDelivererCode.addFocusListener(new ExecuteCommandOnFocus(new CommandSetState(this, FILLING_DELIVERER)));

// (...) Other binds here

That’s it. If you haven’t seen it already, I will explain: we are setting up a command that puts our main window in different states depending on the component that gains focus.
The key is in recognizing that the first event that fires when the user gets ready to fill in a field is the focus-gained event. We take advantage of this by attaching a command that notifies the main window about the ‘type’ of the data to be filled in. Simple and powerful!
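Neither CommandSetState nor ExecuteCommandOnFocus is shown in this post; as a hedged sketch, CommandSetState could be as simple as this:

public class CommandSetState implements ICommand {
   // The state machine to update (our main window in this example)
   ISimpleStateMachine stateMachine;
   // The state to set when the command executes
   int state;

   public CommandSetState(ISimpleStateMachine stateMachine, int state) {
      this.stateMachine = stateMachine;
      this.state = state;
   }

   @Override
   public void execute() {
      stateMachine.setState(state);
   }
}
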
Now we would write this version of the implementation of the IObjectReceiver interface:

public void receiveObject(Object o) {
   if(inState(FILLING_CLIENT)) // Treat o as a client
   else if(inState(FILLING_BUYER)) // Treat o as a buyer
   else if(inState(FILLING_DELIVERER)) // Treat o as a deliverer
}

And that’s pretty much it. What we ended up with here is a bunch of listeners (hopefully the whole family), a bunch of commands (you can implement infinitely many commands, in fact) and two interfaces through which to notify the results of a computation, plus one to control state transitions. All this makes up a mechanism that I’ve come to call Easy Bind. In a further post I’ll talk about this mechanism in general.
Now I’ll leave you with a little reinforcement exercise: if you understood all that, you should have no problem figuring out what the next lines of code do, and a possible implementation for each component:

public class OrderScreen implements ITableObjectChangeReceiver {
   @Override
   public void updateObject(int index, Object newo) {
      printDocument(newo);
      ((CustomTableModel)tblNotPrinted.getModel()).removeRowObject(index);
      ((CustomTableModel)tblPrinted.getModel()).addRowObject(newo);
   }

   // Other method implementations here

   public void someInitializationMethod() {
      tblNotPrinted.addMouseListener(new ExecuteCommandOnDoubleClick(new CommandDoStuffWithTableRow(tblNotPrinted, this) {
         @Override
         public void doStuff(int index, Object o) {
            receiver.updateObject(index, o);
         }
      }));
   }
}

Hello (software) world!

This is the first post of this blog. Considering the reasons this blog was created for, you would think the first post should be epic. Well, it isn’t.
The fact is, there won’t be any epic posts about how software development should be done in Cuba, or about how it should be viewed by the people who dictate laws; or an inspiring post inviting developers to join an army and go to war. NO.
It was firmly settled that posts are going to be about software development, and only that. Showing our work is the best way to accomplish things.
That said, I must mention that I have a stack of things written about software development, and it’s hard to decide which one to start with.
I think it would be good to start with some thoughts on one of my latest works and put some of my older thoughts in between. But before I do this, I have to make some arrangements so you can understand everything. Here it goes:
About a year ago my company started introducing Java technology into its development mainline. As it was a new technology for us, we didn’t have much understanding of it and there was no codebase to develop on top of. As we developed some projects, we made a lot of mistakes in designing classes and modeling an architecture, but we also learned a lot about the Java language and libraries. At the beginning, the Java language creates a lot of friction in a project’s progress, mainly because its very own architecture forces you to design in a certain way, so that you can take advantage of the things already done only if you follow such a design.
As we continued to code and adapt to the Java way, our own codebase started emerging. We now have a framework on which to develop our projects, and I would like to highlight some of the thinking behind some of its features; my following posts are going to be about this.