Monday, February 23, 2015

Dealing with technical debt

Projects grow larger, and people come and go while budgets and deadlines are always there, leading people to weigh things like analysis and code quality in terms of time and profit. Even if you have the perfect project manager, one who understands that the impact of poor code will in time probably result in hard-to-manage problems in production systems, you will always find some remnants of your I-don't-want-to-go-to-work days in the code. Moreover, bad design, inaccurate analysis and just-make-it-work technology upgrades will often make you wonder why things were done that way, or make you mumble over an aspect of the system that doesn't make any sense. Then comes the time when you need to reconsider some things from scratch: re-analyze, re-design and re-implement, even if the impact and effort are huge and things that used to work may break.

Choose the right time

Along with the minor features that a customer requests from time to time, sometimes what they need requires vast changes in the code, which in turn dictate what needs to be retested. That's the key point: talk with the testing team and find out which paths are going to be tested, along with the assumptions they will make in the course of that. Then you'll have an umbrella under which you can do some major rewrites.

Choose what to revisit

Go grab the people who have the closest thing to the full picture of the project in their minds. Talk with them and find out which technical problems could be addressed along with the functionalities being tested. Re-discuss with the testing team if the boundaries agreed above need to be crossed. Estimate, then give your customer the deadline. If agreed, move to the next step; if not, revisit and descope.

Choose the right person

Go find someone who isn't part of the project team. Someone who doesn't know the aspects of the system that go untouched because of their complexity and the fear of what changing them may cause. Choose someone who wouldn't spend time thinking about why some parts were done wrong, but will rewrite them on the fly without asking. It is even better if that person is obsessive enough to clear every single warning in the code and has opinions on how things should be done. Finally, ensure that this person gets quick answers from the analysis side, and that others help clear the way when they think some functionality needs to be modified to get better results. Inform the project team that, within those boundaries, everything is questionable. Then release the lunatic and leave them alone, providing the deadline they should meet before others go in to implement the newly requested functionalities.

Near the end of that task, schedule a meeting with the technical team and ask the person to explain what they changed and to give advice to the others about what drew their attention.

Repeat this cycle for every non-trivial feature.

Thursday, May 8, 2014

Testing persistence with CDI-Unit and Hibernate.

In the project I'm involved with, we've had multiple tests evaluating several JPA queries and persistent entities. Our initial approach was to test persistence with Arquillian, but time pressure and the size of the development team left us with a daily build lasting about 20 minutes, and we didn't have time to figure out how to refactor our tests to run faster. Furthermore, we were keeping track of our persistent entities in persistence.xml, and given the size of the test war, it was sufficient to have One New Entity To Break Them All.

The next step was to try CDI-Unit: an EntityManager CDI producer backed by an H2 database. We were also using EJB3Configuration to programmatically build our EntityManagerFactory. It was brilliant; persistence tests were running as if they were simple unit tests.
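The wiring looked roughly like the following sketch. This is an assumption of what such a producer could look like, not the project's actual code; the class name, persistence-unit name "test-pu" and JDBC URL are all made up, and it requires the CDI, JPA and H2 libraries on the test classpath:

```java
import java.util.HashMap;
import java.util.Map;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Disposes;
import javax.enterprise.inject.Produces;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Hypothetical test-scoped producer: backs the injected EntityManager
// with an in-memory H2 database instead of the container's datasource.
@ApplicationScoped
public class TestEntityManagerProducer {

    @Produces
    @ApplicationScoped
    public EntityManagerFactory createFactory() {
        Map<String, String> props = new HashMap<>();
        props.put("javax.persistence.jdbc.driver", "org.h2.Driver");
        props.put("javax.persistence.jdbc.url", "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1");
        // Recreate the schema for each test run.
        props.put("hibernate.hbm2ddl.auto", "create-drop");
        // "test-pu" is an assumed persistence-unit name from a test persistence.xml.
        return Persistence.createEntityManagerFactory("test-pu", props);
    }

    @Produces
    public EntityManager createEntityManager(EntityManagerFactory emf) {
        return emf.createEntityManager();
    }

    public void close(@Disposes EntityManager em) {
        em.close();
    }
}
```

With CDI-Unit, a test class run under its JUnit runner can then simply @Inject an EntityManager, and persistence tests behave like plain unit tests.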

Last week we decided to go with WildFly; EJB3Configuration wasn't there anymore, so we looked for a replacement. After several hours of debugging, as we found no documentation for this, we managed to make it work:

To cut a long story short, it worked by overriding:


Sunday, August 4, 2013

CUPS, Epson and iscan-data

I have a USB-attached Epson Stylus Office BX305F on a Gentoo system running a 3.8.13 Linux kernel. Recently I updated to cups-1.6.2, as the cups-1.5.x versions were masked and I thought it was a good idea. Then, although the CUPS usb backend was able to see the printer, any jobs I sent to the printer queue were not printed, stuck with a 'waiting for the printer to become available' message.

I started investigating the problem: I stopped using usblp and turned to libusb, with no luck. I read almost every report on upgrading from CUPS 1.5.x to 1.6.x, but found nothing that fit my case. My last resort was to completely unmerge iscan and iscan-data, the proprietary Epson scanner driver. The problem was then solved; it seems there is a problem with 99-iscan.rules. In the end, excluding the 'udev' USE flag from iscan-data fixed it.

Sunday, March 31, 2013

Notes on Activiti async and transaction internals

Given the business requirement that each business step be a separate transaction, I started using Activiti asynchronous continuations as a way to demarcate transactions in the desired way. Under JBoss AS 7, with JTA transactions and the like, I probably made a major mistake by placing a CDI listener on the first transition of a subprocess in order to demote all the variables and attachments of the parent process to the subprocess. That subprocess was the first action of the top-level process. The second mistake was that the call activity of the top-level process was not made asynchronous.

In this variable-demoting method, the need to also demote parent-process attachments revealed some problems. To my understanding, the RuntimeService.startProcessInstanceBy*() methods return a ProcessInstance entity when the process has reached a wait state or an async task. That operation is a new transaction initiated by JtaTransactionInterceptor if no transaction is active at the moment, or a joined transaction otherwise. In my case, startProcessInstanceByKey() was called by an MDB, so no new transaction was initiated. Consequently, I was getting the ProcessInstance entity when the process had reached the first async service task in the subprocess.
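The calling side can be sketched as follows. This is a hypothetical MDB, not the project's code; the process key "topLevelProcess" is made up, and it assumes the Activiti 5.x CDI integration so that RuntimeService is injectable:

```java
import javax.ejb.MessageDriven;
import javax.inject.Inject;
import javax.jms.Message;
import javax.jms.MessageListener;

import org.activiti.engine.RuntimeService;
import org.activiti.engine.runtime.ProcessInstance;

// Hypothetical MDB: onMessage() already runs inside a container-managed JTA
// transaction, so JtaTransactionInterceptor joins it instead of starting a
// new one, and startProcessInstanceByKey() only returns once the process
// reaches a wait state or an async task.
@MessageDriven
public class StartProcessMdb implements MessageListener {

    @Inject
    private RuntimeService runtimeService;

    @Override
    public void onMessage(Message message) {
        // "topLevelProcess" is an assumed process-definition key.
        ProcessInstance pi = runtimeService.startProcessInstanceByKey("topLevelProcess");
        // Anything stored on the process here is still part of the same
        // uncommitted JTA transaction.
    }
}
```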

After that, in the same transactional method, I was trying to store some attachments to the process, expecting that they would be demoted to every subprocess. However, because the first call activity had not been set to async, the attachments were pinned to the process but the demote listener wasn't finding anything. With that call activity set to async, startProcessInstanceByKey() starts the process, which immediately writes an entry to the ACT_RU_JOB table for the job executor to find, and returns the ProcessInstance entity. Note that at this point no transaction has been committed, so the job executor, which initiates its own transaction, cannot find anything to execute and advance the process forward. I am then able to store the attachments and be sure that they will be propagated.
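In BPMN terms, the fix amounts to flagging the call activity as asynchronous. A minimal fragment, with made-up ids and keys (assuming Activiti 5.x's extension attribute), could look like:

```xml
<!-- Hypothetical process fragment: with activiti:async="true", starting the
     parent process only queues a job in ACT_RU_JOB; the subprocess itself is
     executed later by the job executor in its own transaction. -->
<callActivity id="firstStep" name="First step"
              calledElement="subProcessKey"
              activiti:async="true"/>
```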

Another misunderstanding was the false assumption that using a CDI-injected Activiti service (e.g. TaskService, RuntimeService) means that a new transaction will begin. As I saw in JtaTransactionInterceptor, that is not the case; it depends on the transactional context within which a method is called. Initially, to my understanding, being under JTA, it shouldn't make any difference whether a variable is stored through DelegateExecution or through RuntimeService. That is probably not the case, because RuntimeService searches for an ExecutionEntity which is not present at that point. So the safest way is to get/set variables through DelegateExecution at that point.
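The safe option can be sketched as a service-task delegate. The class and variable names below are made up for illustration, assuming the Activiti 5.x JavaDelegate API:

```java
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

// Sketch of the "safe" option: inside a service task, read and write
// process variables through the DelegateExecution that Activiti passes in,
// instead of looking the execution up again through RuntimeService (which
// needs an ExecutionEntity that may not be flushed yet).
public class DemoteVariablesDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        // The execution is part of the current transactional context,
        // so no ExecutionEntity lookup is needed.
        Object orderId = execution.getVariable("orderId"); // made-up variable name
        execution.setVariable("orderProcessed", orderId != null);
    }
}
```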

The final misunderstanding was the use of a listener on the first transition of a subprocess. What I didn't understand in the first place is that Activiti stores entries in the ACT_RU_EXECUTION table when it reaches a wait state or async task and is about to suspend and wait for the user or the job executor. So in the first-transition listener, when I'm about to demote the variables to the subprocess, the respective subprocess execution entity doesn't exist yet in the transactional context. An execution query listing all execution entities showed that only the parent process execution entity was in the transactional context. Attaching variables to that DelegateExecution didn't reveal the problem, as it doesn't search for an execution entity, but demoting attachments through TaskService really did.
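For reference, the kind of transition listener discussed above looks roughly like this hypothetical sketch (again assuming the Activiti 5.x API; the class name and variable are made up):

```java
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.ExecutionListener;

// Hypothetical first-transition listener. At this point only the parent
// process's ExecutionEntity exists in the transactional context, so any
// API that has to look the subprocess execution up (e.g. TaskService
// attachment calls) will not find it yet.
public class DemoteToSubprocessListener implements ExecutionListener {

    @Override
    public void notify(DelegateExecution execution) throws Exception {
        // Safe: operates directly on the execution handed in by the engine,
        // without querying for an execution entity.
        execution.setVariable("demotedAt", System.currentTimeMillis());
    }
}
```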