Blog

Data validation on REST API: Don’t trust clients input

REST APIs have become more and more common in Web applications and tools around the Internet, especially those in an open social context. It is also undeniable how REST-style architectures have been growing in enterprise applications, probably as a lighter-weight architectural alternative to expose services and/or produce a consumer-based application. […]

Static Analysis: As part of reviewing what you are getting.

Last week I ran into a friend I hadn't seen in a long time. Over some talks and beers, he mentioned something that caught my eye. He told me about the new strategy adopted by the company where he works. Due to the market situation, they need to outsource some projects, or even […]

Overengineering: Now, who pays the price?

Last week Felipe and I were talking about some situations we have come across. They were really similar situations, although we have never worked at the same company. We can see situations in software development around us that have their origin in overengineering, especially overengineering in detailed design (I know that "design" is a huge topic, so […]

Technical Debt: How much have you been paying for that?

A long time ago Ward Cunningham introduced the idea behind the term "technical debt"[1]. After that, others like Steve McConnell[2] added more thoughts on this metaphor. It is quite interesting to see this playing out in a real scenario, and in addition, to see the team getting used to the situation, paying interest each release cycle, […]

"Friends with Benefits": By REST

People are creating new Web applications and trying to push them onto the market all the time. Different ideas pop up every few seconds. Some years ago most Web applications could live "alone", I mean, without integration or interaction with others, but today? Can you imagine trying to create a "revolutionary"/cool Web application without a "friend […]

More things than just finding a "solution"

During a daily talk over email, a friend of mine came to me with a question about a specific approach he would like to adopt in a new application (an information-system type of thing) that the company where he works will start soon. Basically the scenario is: they would like to build a UI in Delphi […]

Plugin Architecture: Reversed Effort

Are you considering plug-ins in your next application/tool? Plug-in architecture is not a new concept; as you probably know, it has been around for a long time, and there is a bundle of stable "solutions" to support it, such as Eclipse/OSGi and others. Indeed there are several good technical arguments that can catch the development team's eye […]

Where is the boundary between "Rapid" and Quick-and-Dirty on software development?

As you can see, it is a classic Dilbert comic strip, but it probably represents the truth in some spots, especially for those teams that just want to deliver things, usually as a way to show productivity. A potential problem arises when the "Quick-and-Dirty" approach becomes the rule and a lack of […]

Linux Support for ECF Skype Provider

ECF (2.0.0 Milestone 4a – N&N) adds Linux support for the Skype provider, so it is now possible to connect to Skype 2.0 beta under Eclipse on Linux. The Skype provider implements the ECF Call API, the Presence API, and the Datashare API.

Inside source code crawler world: the evolution of searching, mining and retrieving of software components

Software developers are facing a big challenge in today's software development field: developing and delivering often complex, high-quality running software using multiple object-oriented (OO) frameworks, libraries and application programming interfaces (APIs) in a very short period of time. This large spectrum of possibilities results in a steep learning curve due to the huge amount of information – classes, methods, interfaces, relations, dependencies and so forth – to be assimilated. At the same time, companies are struggling to fit into their tight schedules while dealing with pressure to meet deadlines and deliver products to market earlier than their competitors. This creates a good opportunity for mechanisms that might improve development quality and productivity by using existing software assets.

Fabiano Cruz and I decided to talk about the evolution of code search and how source code is shared around the world nowadays. We will split this post into three or more essays. In this first essay, we are going to classify the evolution of source code search engines. So, please, feel free to drop us a line with your opinion about this topic.

It's a common habit: developers ask colleagues for a source code sample. For instance, if you need to create a simple JTree, it's easier and faster to ask someone to send sample code like Listing 1 than to try to learn it from the API. Who has never asked a friend for a code snippet?

import javax.swing.JScrollPane;
import javax.swing.JTree;
import javax.swing.tree.DefaultMutableTreeNode;

// Build a tree model: a root node with two children
DefaultMutableTreeNode root = new DefaultMutableTreeNode("Root");
DefaultMutableTreeNode child1 = new DefaultMutableTreeNode("Child 1");
root.add(child1);

DefaultMutableTreeNode child2 = new DefaultMutableTreeNode("Child 2");
root.add(child2);

// Wrap the tree in a scroll pane and add it to any container
JTree tree = new JTree(root);
someWindow.add(new JScrollPane(tree)); // someWindow: e.g. a JFrame's content pane

Listing 1 - JTree sample code snippet

We could roughly compare the software development field without a powerful mechanism to search, mine and retrieve source code samples to the early ARPANET, at the beginning of the WWW project, without the Google search engine. Indeed, back then there weren't 25 billion web pages to index, just as we didn't have all the fancy languages and frameworks we have in the software development field today.

So, there are several tools available to empower developers with an easy-to-use interface to search for source code snippets and share source code among other developers. For instance, developers can make their source code available to a lot of people through 'forge' repositories, blogs, articles and so on.

In order to classify the evolution of source code search engines, we could say that the First Generation of these tools was the popular Unix command find (see Listing 2 below). The find command allows the Unix user to process a set of files and/or directories in a file subtree. It can be used to search for a certain code snippet in a project directory, supporting coding by "copy-and-paste" or the "develop by example" practice – or you could even call it [reuse].

 
find . -name '*.c' -exec grep "hashtable" '{}' \; -print

Listing 2 - The find command

The basic idea is to run the find command on a directory in order to locate a set of files that match a shell pattern or a regular expression; grep is then run on each file to look for a certain pattern. Matched lines, including some context, are collected and displayed for visualization. The user may inspect an entire file if he/she wishes.

Analyzing the evolution of code search tools, the Second Generation was the traditional search engines, which are based on information retrieval technologies. There, developers could find and retrieve source code samples. Nevertheless, these tools implement techniques such as text relevance and link analysis that can't be fully applied to source code. So, while developers are looking for a small piece of source code, these tools show plain text as the result. Clearly they don't apply the different concepts and techniques needed for processing and searching structured source code, or for determining the probable intent of a search, leaving the result-analysis job to developers. Of course, this type of search has its drawbacks.

Some of the freshest code search tools (see Table 1 below) – and here we could say Third Generation – take an approach that enriches the support and service, which definitely improves search quality. Currently, the way to automatically retrieve such volumes of source code data is through a focused Web crawler: a program that visits many repositories (public source code repositories like java.net and SourceForge), scans their content, and keeps visiting and indexing more source code.

Tool                  URL
Google Code Search    http://www.google.com/codesearch
Koders                http://www.koders.com
Ucodit                http://www.ucodit.com/espviewer
Jsourcery             http://jsourcery.com
Krugle                http://www.krugle.com
merobase              http://www.merobase.com

Table 1 – Source code search engines
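
As a rough illustration of the crawling idea, the sketch below walks a purely in-memory, hypothetical graph of repository pages breadth-first and "indexes" anything that looks like a Java source file. A real focused crawler would fetch pages over HTTP and parse their links; the Map here just stands in for that.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Minimal focused-crawler sketch: breadth-first traversal of repository
// pages, indexing only pages that look like source files.
public class FocusedCrawler {

    public static List<String> crawl(String seed, Map<String, List<String>> links) {
        List<String> indexed = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(seed);
        while (!queue.isEmpty()) {
            String url = queue.poll();
            if (!visited.add(url)) continue;              // skip already-visited pages
            if (url.endsWith(".java")) indexed.add(url);  // "focused" filter: source files only
            for (String next : links.getOrDefault(url, List.of())) queue.add(next);
        }
        return indexed;
    }

    public static void main(String[] args) {
        Map<String, List<String>> site = Map.of(
            "repo/",     List.of("repo/src/", "repo/README"),
            "repo/src/", List.of("repo/src/Tree.java", "repo/src/Node.java"));
        System.out.println(crawl("repo/", site)); // [repo/src/Tree.java, repo/src/Node.java]
    }
}
```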

There are other requirements to take into account while effectively digging for source code, such as metadata terms, class names, imports, code blocks, comment blocks, method return types and others. For instance, Java source code provides a good way to get information to crawl and index, like annotations, code conventions (the standard conventions that Sun follows and recommends – java.sun.com/docs/codeconv) and so forth. We can't forget about fingerprints, an interesting feature provided by some code search tools: a fingerprint denotes both the presence and multiplicity of specific programming constructs within individual code entities. Some code search tools make several kinds of fingerprint search available – for example, Micro Patterns in Java Code, which indicates whether or not simple design patterns are present within a code entity.
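
To give a flavor of that kind of structured indexing, here is a hypothetical sketch that pulls the package name, imports and class name out of a Java compilation unit with simple regular expressions. Production engines use real parsers; the class and method names below are made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the metadata a code-focused indexer might extract from a
// Java compilation unit: package name, imports, and the class name.
public class JavaMetadataExtractor {

    private static final Pattern PACKAGE = Pattern.compile("^\\s*package\\s+([\\w.]+)\\s*;", Pattern.MULTILINE);
    private static final Pattern IMPORT  = Pattern.compile("^\\s*import\\s+([\\w.*]+)\\s*;", Pattern.MULTILINE);
    private static final Pattern CLASS   = Pattern.compile("\\bclass\\s+(\\w+)");

    public static String packageName(String source) {
        Matcher m = PACKAGE.matcher(source);
        return m.find() ? m.group(1) : "";
    }

    public static List<String> imports(String source) {
        List<String> result = new ArrayList<>();
        Matcher m = IMPORT.matcher(source);
        while (m.find()) result.add(m.group(1));
        return result;
    }

    public static String className(String source) {
        Matcher m = CLASS.matcher(source);
        return m.find() ? m.group(1) : "";
    }

    public static void main(String[] args) {
        String src = "package demo;\nimport java.util.List;\npublic class Tree {}";
        System.out.println(packageName(src) + " " + imports(src) + " " + className(src));
        // demo [java.util.List] Tree
    }
}
```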

Another special feature available in some source code search engines is a kind of binder, where developers can save search results and share them with others, providing benefits not available in traditional search engines. We can imagine an automatic agent notifying you when additions are made to your binder.

In the last couple of years, specialized source code search engines have emerged. These tools support different programming languages and let you filter by license, among many other interesting features.

When you ask developers – the most enthusiastic users of source code search engines – what the benefit of using such tools is, they point out "reuse", and a lot of them are Java developers. Indeed, this is explained by the fact that the Java community is built on sharing and reuse.

Another thing that has been emerging is tools for code search integrated directly into the development environment, providing contextual search capabilities: depending on what the developer is supposed to do in that part of the code, these tools can query a sample repository for code snippets that are relevant to the programming task at hand. For instance, if you are writing an MD5 method within a class, these tools can infer queries from context, so there is no need for the programmer to write queries; the most structurally relevant snippets are returned to the developer and can be navigated.
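
The query-inference idea can be sketched very roughly: treat the identifiers around the developer's cursor as query terms and rank candidate snippets by term overlap. Everything below (class name, method names, the sample snippets) is a made-up illustration, not how any particular tool actually works – real IDE-integrated tools use structural matching rather than bag-of-words overlap.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Toy sketch of "query inference from context": split the code being
// edited into identifier terms, then rank candidate snippets by how
// many terms they share with that context.
public class ContextQuery {

    static Set<String> terms(String code) {
        Set<String> t = new LinkedHashSet<>();
        for (String s : code.split("\\W+")) if (!s.isEmpty()) t.add(s.toLowerCase());
        return t;
    }

    static String bestMatch(String context, List<String> snippets) {
        Set<String> query = terms(context);
        String best = "";
        int bestScore = -1;
        for (String snippet : snippets) {
            Set<String> overlap = new HashSet<>(terms(snippet));
            overlap.retainAll(query);  // shared terms act as the relevance score
            if (overlap.size() > bestScore) { bestScore = overlap.size(); best = snippet; }
        }
        return best;
    }

    public static void main(String[] args) {
        String context = "byte[] digest(String text) { MessageDigest md5 = ... }";
        List<String> repo = Arrays.asList(
            "MessageDigest md = MessageDigest.getInstance(\"MD5\"); md.digest(text.getBytes());",
            "JTree tree = new JTree(root);");
        System.out.println(bestMatch(context, repo)); // prints the MessageDigest snippet
    }
}
```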

Undoubtedly, tools like these can help large organizations go beyond reuse barriers, empowering them with a straightforward approach for searching, mining and retrieving software components and source code snippets across their (usually distributed) source code repositories. Also, many software developers can quickly find helpful and useful information for building ever more high-scale and pluggable software components, keeping them focused on the problem domain at hand.

Last but not least: we hope you enjoyed the reading, and possibly learned a few things. You are also welcome and encouraged to drop your comments here. In the next post, we're planning to dive deeper into the Third Generation source code crawler world, comparing the existing tools and their strengths and weaknesses, particularly regarding the Java language.

Using Maven to remove Chinese Wall during offshore development

I came across several interesting communication glitches in an offshore software development project. The project comprises analysis, design, architecture and QA teams who work onshore, while the developer teams work offshore. We faced problems such as communication bottlenecks and a lack of awareness of what developers were actually producing. We solved a big part of these problems through the implementation of a communication plan between the teams. This plan focused on face-to-face interactions and the use of tools like wikis, IM, Skype, forums and desktop sharing systems.

The developers were not located in the same place as the analysis, design, architecture and QA teams, but those teams needed to interact with whatever the developer teams generated (source code, unit tests, etc.), or in some situations needed to be aware of situations involving other group members. Motivated by this, we decided to use Maven to provide awareness for these geographically scattered groups.

Maven site generation was a simple way to make information on source code projects available, so that scattered teams could analyze developers' work and stay aware of who they were working with, using up-to-date information. The analysis and design, architecture and QA teams can therefore evaluate source code and developer tasks (e.g. unit tests, code analysis, test coverage) according to the process designed for the project. It also helps the developer teams verify that they are following the code style and adopted metrics, and keeps everyone on top of unit test execution (for domain and persistence layers, for instance), defect tracking, the requirements queue for implementation, Javadocs, and more.

Maven site generation easily produces a site with plenty of project information. The secret consists in adding interesting plug-ins which will generate reports. Some of the plug-ins that I like and consider important for team communication are Javadoc, unit test, test coverage, FindBugs, PMD/CPD, Checkstyle, Changelog, TaskList, JXR, Developer Activity, FAQ, StatCvs, Dependencies Graph, metrics (JDepend and JavaNCSS) and specific plug-ins that you can write yourself.
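
For example, a Maven 2 pom.xml can wire several of these reports into the generated site through its <reporting> section. The fragment below is illustrative only; plugin versions and per-report configuration are omitted for brevity.

```xml
<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-javadoc-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-report-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jxr-plugin</artifactId>
    </plugin>
  </plugins>
</reporting>
```

Running `mvn site` with a section like this publishes the reports alongside the project documentation, so every team sees the same up-to-date picture.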

Distributed software development is very challenging, especially when the subject is awareness between groups. Whenever possible, you need to make source code information available as it is produced, and Maven is a simple way to do this and much more.

Teaching Java platform to undergraduate students is a process

Learning the Java platform is a process. Some JUG Petropolis members and I have been teaching the Java platform for five years, free of charge, in an academic environment. Our focus is to help people in the academic community who want to learn Java. I always describe a strategy to make sure that students can continue to learn Java effectively.

Here's How:

  • Remember that learning the Java platform is a gradual process – it does not happen overnight.
  • Define your learning objectives early: what do you want to learn, and why?
  • Make learning a habit. Try to learn something every day, and put it into practice. You need to type code!
  • Remember to make learning a habit! If you study for 30 minutes each day, Java will be constantly in your head. If you study once a week, Java will not be as present in your mind.
  • Choose your materials well. You can get tips on sites and materials from a JUG near you.
  • Find friends to study and practice with. Learning Java together can be very encouraging.
  • Move your fingers! Practice coding what you are learning. It may seem strange, but it is very effective.
  • Be patient with yourself. Remember learning is a process – writing good code takes time.
  • Communicate! There is nothing like communicating. Book exercises are good – having a JUG friend on the other side of the world who understands your code is fantastic!
  • Use the Internet. The Internet is the most exciting, unlimited Java resource that anyone could imagine, and it is right at your fingertips.

Are software architects not working hand-in-hand with developers?

Some time ago, a project I participated in as a team manager caught my attention. The development team was constantly complaining about the complexity they had to grapple with to implement every single use case. When asked whether their difficulty lay in the technological aspects, or whether the core business involved was too complex, they reported that their problem was elsewhere: the challenge actually had to do with the steps they had to follow in order to implement, for instance, a simple CRUD respecting the architecture designed by the software architect. A CRUD with just two attributes required the development of 10 different classes to allow the insertion of a single record in the database.

Then I coincidentally came across a blog post by Okke van ‘t Verlaat which got me thinking about our predicament.

Our J2EE based project demanded that the development of any CRUD follow a bundle of classes defined through several design patterns, as follows:

[Diagram: the bundle of classes required for a single CRUD use case]

The team made use of this set of classes in about 90% of the use cases they implemented. The structure was designed using a Business Delegate, a Session Façade, Application Service(s), Data Transfer Object(s), an EntityBean (a simple data POJO) and a Data Access Object. Moreover, it was always necessary to use abstract classes with embedded factories for the DAOs, the Business Delegate and the Application Service.
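
To make the overhead concrete, here is a hypothetical skeleton of such a bundle: a single insert passes through a Business Delegate, a Session Façade, an Application Service and a DAO (plus a DTO and a DAO interface) before anything is persisted. All class names are illustrative, not the project's actual code.

```java
// Six types just to insert one record (hypothetical names throughout).
class CustomerDTO {
    final String name;
    CustomerDTO(String name) { this.name = name; }
}

interface CustomerDAO {
    void insert(CustomerDTO dto);
}

class HibernateCustomerDAO implements CustomerDAO {
    static final java.util.List<String> saved = new java.util.ArrayList<>();
    // Stand-in for session.save(dto) in the real code
    public void insert(CustomerDTO dto) { saved.add(dto.name); }
}

class CustomerApplicationService {
    private final CustomerDAO dao = new HibernateCustomerDAO();
    void create(CustomerDTO dto) { dao.insert(dto); }
}

class CustomerSessionFacade { // stands in for the EJB session bean
    private final CustomerApplicationService service = new CustomerApplicationService();
    void create(CustomerDTO dto) { service.create(dto); }
}

public class CustomerBusinessDelegate { // what the presentation layer calls
    private final CustomerSessionFacade facade = new CustomerSessionFacade();

    public void create(String name) { facade.create(new CustomerDTO(name)); }

    public static void main(String[] args) {
        new CustomerBusinessDelegate().create("Alice");
        System.out.println(HibernateCustomerDAO.saved); // [Alice]
    }
}
```

Each layer here does nothing but forward the call, which is exactly the kind of ceremony the team was complaining about.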

The diagram above shows HibernateDAO realizing the DAO interfaces. It is important to note that Hibernate already provides database independence across several vendors (and in this project we needed to be able to work with different databases). So, is it really necessary to create interfaces for the DAOs?

This architecture suggests an anemic domain model, built from several design patterns that aren't necessary. There are no business objects with domain logic, responsible for enforcing the business rules. Instead, there are service objects that capture the domain problem.

These service objects use the business objects only to store data – isn't that "organized" in a procedural design style? Business operations are kept in service objects throughout the whole project.

I agree that it is important to have a service layer that encapsulates the application logic, helping with transactions and managing operation information. But does that choice justify avoiding the creation of core business objects? With this architecture, a simple insert required the implementation of several classes. Isn't it possible to throw some of these classes away?
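
One possible simplification, sketched below with made-up names, is to let the entity itself carry the domain logic and persist it through a single concrete DAO: two types instead of six. Whether this fits a given project is, of course, a judgment call.

```java
// Hypothetical simplification: a rich domain object plus one concrete DAO.
class Customer {
    private String name;

    Customer(String name) { rename(name); }

    // Domain logic lives on the object itself, not in a service layer.
    void rename(String name) {
        if (name == null || name.isEmpty()) throw new IllegalArgumentException("name required");
        this.name = name;
    }

    String name() { return name; }
}

public class CustomerRepository { // concrete DAO; no interface until a second implementation exists
    private final java.util.List<Customer> store = new java.util.ArrayList<>();

    public void save(Customer c) { store.add(c); } // session.save(c) with Hibernate

    public int count() { return store.size(); }

    public static void main(String[] args) {
        CustomerRepository repo = new CustomerRepository();
        repo.save(new Customer("Alice"));
        System.out.println(repo.count()); // 1
    }
}
```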

Why don't software architects think simple when designing the architecture (KISS – keep it as simple as possible)?

It is fundamental for the software architect to provide a solution design that is truly viable and that can be successfully and rapidly constructed, implemented, operated and managed.

The fast way to win a JavaOne pass - Brazilian developers only

NetBeans, SouJava and GlobalCode launched the "Desafio NetBeans" challenge during the last International Marathon 4 Java. Brazilian developers have a chance to win a JavaOne 2006 pass, entitling the winner to free roundtrip airfare to San Francisco with hotel accommodations included. You can participate as an individual or as a team. If you participate as a team and win first place, you will be awarded 3 JavaOne 2006 passes, namely 3 free roundtrip airfares to San Francisco with hotel accommodations included.

To qualify, you must devise a novel and original plugin for NetBeans 5.0, good enough to be the chosen one. But it is not all that simple – some rules apply. So go ahead and check them out at https://desafionetbeans.dev.java.net.

Here are some ideas to inspire you. But watch out, because these plugins may already exist:

  • plugin to support LOG4J
  • plugin to support Hibernate
  • plugin to support Spring
  • plugin to support UML
  • plugin to support Maven
  • plugin to support JXTA
  • etc....

You must submit your plugin idea ASAP (deadline is 12/20), because this challenge promises to be great!

If you are a Brazilian developer please don't miss the boat. Face this challenge head on at https://desafionetbeans.dev.java.net.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License