Saturday, December 8, 2012

Thoughts on Spring Surf

Spring Surf is a view composition solution originally developed as part of Alfresco's products, but which has recently been spun off into its own Spring project (technologies mentioned here were previously named Alfresco rather than Spring).  I've been working with Surf on a project for the past six months and have grown familiar with many of its intricacies.

What it is

Spring Surf is probably best known for its use within Alfresco Share.  Surf provides a skeleton around Spring Webscripts, which are effectively bundles of convention-driven MVC functionality made to work within Alfresco.  Surf is largely notable for its very verbose, XML-based view definitions, where the various components are customizable in multiple locations.  Additionally, Surf creates a rich model which represents the assorted objects that play a role in the view and can be used while rendering for some view-specific logic.

My Hopes

I adopted Spring Surf for a project because we use Alfresco and I was looking for a replacement solution that would keep stakeholders from insisting that some stale legacy code be carried forward.  The verbosity of Surf seemed to promise higher levels of flexibility, which, combined with Alfresco, could allow for a greater amount of runtime control and the publishing of a wide range of view updates through Alfresco Web forms.  The splitting off of Surf into a Spring project was promising, as it indicated that Alfresco was interested in creating a richer ecosystem of libraries that were less coupled to their architecture.

DevCon

I recently went to Alfresco DevCon 2012 primarily to get a feel for the future direction of the project so that my work would not diverge too strongly (particularly since most documentation is still focused on version 3, while version 4 appeared to be a major shift).  Alfresco is understandably focused on enhancing its role as an ECM which co-exists with other solutions, and in particular on becoming the cloud-compatible ECM.  Solutions which are not directly concerned with document management are increasingly being handled outside of Alfresco using CMIS and entirely different technologies (like Drupal).

My Realities (not necessarily anyone else's)

The biggest single hurdle in working with Surf was the lack of documentation.  Although it had been split off, the Spring pages were seemingly given no attention and there was effectively no information about using the framework in non-Alfresco contexts.  After DevCon it became painfully clear that this is due to Surf not being useful outside of Alfresco, particularly Alfresco Share.

Being focused on being an ECM, Alfresco's role is to make sure its solutions can be worked with; it is not their role to develop external solutions.  The splitting of Share into a more distinct client application was necessary to become more Cloud-y.  I can only suppose that shifting Webscripts and Surf to the Spring umbrella was done in the hope that they would gain traction outside of Alfresco (which they haven't).  Alfresco provides a powerful platform for the repository and for the Share client, but on the client side the result is that you must be on top of the platform to reap the benefits.  The idea of further modularization into a library or loosely coupled framework is not on their horizon.

Surf is too cumbersome to compete with other similar solutions.  There is virtually no tooling to help with the creation of Surf files.  The one possible exception is the now seemingly abandoned Spring Roo plugin, which to my eyes is on the wrong end of the development tools: a possible benefit of having the view defined using Surf's approach would be to transfer greater power to a non-engineer (a non-Roo user).

The imagined flexibility didn't pan out, as the model of the view is created entirely at start-up.  This makes sense, and there are ways to work around it, but ultimately it does not provide any out-of-the-box flexibility beyond other view composition approaches.

Webscripts still have promise (and are used effectively in another department at my office), but unless they are communicating with Alfresco they could also be replaced by a far simpler and lighter solution.

Ultimately Spring Surf only makes sense as Alfresco Surf and used in Alfresco Share.  The entire structure makes perfect sense when viewed through the lens of a larger platform which allows for a consistent programming model when dealing with repository nodes and a View which represents the interface to that repository.  Outside of that context, however, the complexity it introduces provides no benefit over slight modifications of far, far simpler alternatives.

If you are building a solution that should sensibly be tied to the Alfresco platform, then using Surf to customize Share and optionally working with an Alfresco partner makes a lot of sense.  If you're looking for a more general purpose solution and may have even thought something like Spring == a solution that can help for a variety of problems: keep looking.

Transparent REST client code for your services using Spring 3 and RESTEasy

The advantages of creating REST services have been re-iterated enough to be skipped here.  I will quickly mention that my personal, most practical favorite is that it keeps the I/O of the service visible enough that the endpoints can be easily debugged in isolation using curl.  One of the big dangers is that the simplicity of creating client code can lead to issues like duplication and inconsistency as one-off code is thrown together.  Ironically, the simplicity that is the strength of REST can become its weakness, as the need for structure is so dramatically reduced.  Most JAX-RS implementations (I think) provide support for a client package, leaving you with the best of both worlds: a simple REST service and a means to automatically generate structured client code.

RESTEasy is being used because the target environment is running JBoss servers.  Alternative JAX-RS implementations such as Jersey and CXF should offer similar mechanisms, though they were not researched thoroughly.

Background

Basic Objectives

This solution assumes that both a service and a client package for that service are being developed.  There is nothing to prevent the client solution from being used by itself, but it would likely not be justifiable considering the relative complexity of the produced system and the associated dependencies.

The basic goals are as follows:
  • The service should be a simple, standard REST service
  • Java client code should be able to transparently access the service
  • The API information should be DRY between systems
  • It should be easier to code than alternatives

The simple client alternative

More often than not it seems as though REST clients are created directly using HttpClient or (in Spring) the RestTemplate.  This is a very simple and straightforward approach.  Additionally...it is quite likely the most viable alternative when accessing a third-party service that does not provide a specific client package.  In particular this would make sense if the service is implemented in another language (and I would probably recommend using a language that allows for faster development unless there's a reason for Java).

Possible Issues

(All of these could be avoided of course)
  • The API information (including DTOs) is likely to be duplicated in the client and the server
  • Management of the connection and serialization concerns may need to be addressed by hand (and is therefore likely to be buggy)
  • The connection may be implemented outside of a properly defined layer and interface
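
To make the comparison concrete, a hand-rolled client might look something like the following sketch (the class name, resource path, and URL layout are mine, and real code would also deserialize the JSON):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// A hand-rolled client: quick to write, but the path, media type, and
// serialization concerns are duplicated here rather than shared with the server.
class SimpleRestClient {
  private final String serviceRoot;

  SimpleRestClient(String serviceRoot) {
    this.serviceRoot = serviceRoot;
  }

  // The resource path is re-stated here, so a server change breaks it silently.
  String findOneUrl(long id) {
    return serviceRoot + "/collection/" + id;
  }

  String findOneJson(long id) throws Exception {
    HttpURLConnection connection =
        (HttpURLConnection) new URL(findOneUrl(id)).openConnection();
    connection.setRequestProperty("Accept", "application/json");
    BufferedReader reader = new BufferedReader(
        new InputStreamReader(connection.getInputStream(), "UTF-8"));
    try {
      StringBuilder body = new StringBuilder();
      String line;
      while ((line = reader.readLine()) != null) body.append(line);
      return body.toString();
    } finally {
      reader.close();
    }
  }
}
```

Every caller of this class depends on the URL structure staying in sync with the server by hand, which is exactly the duplication listed above.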

RESTEasy Proxy Client in Spring

RESTEasy provides a proxy client that can be used to connect to a defined service interface.  The documentation demonstrates how it can be used, but (as it is framework agnostic) does not provide integration information.  Using Spring 3 programmatic configuration allows for seamless, automatic creation of a service layer which can be used to access the remote REST API.  This keeps the definition of the service consolidated between client and server and therefore eases maintenance.  Additionally, it allows the client system to call the proxy and be handed an object without concern for where it came from, leaving the RESTEasy code to properly handle the details.

Implementation

The Interface

The JAX-RS information should be provided in the form of an interface, if that is not normally practiced.  The JAX-RS interface is what drives everything, so it should be packaged in a location that is accessible to both the client and the server (as should any DTOs); within the client module would work as a default.  A simple example:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path(ExampleResource.PATH)
public interface ExampleResource {
  public static final String PATH = "/collection/";

  @GET
  @Path("/{id}")
  @Produces(MediaType.APPLICATION_JSON)
  Example findOne(@PathParam("id") Long id);

}

The Server

The server should provide the endpoint that implements the created interface.  This is standard JAX-RS behavior.  It could be done directly on top of an existing service; I personally like to keep it in a controller layer to ensure that it can remain resource-oriented while the services may be more message-oriented.
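
As a sketch, the server side can be as thin as the following (the stand-in types are mine so the sketch compiles on its own; in practice the interface and DTO come from the shared module):

```java
// Stand-ins for the shared interface and DTO from the client module.
class Example {
  private final Long id;
  Example(Long id) { this.id = id; }
  Long getId() { return id; }
}

interface ExampleResource {
  Example findOne(Long id);
}

// The endpoint: implementing the shared interface picks up its JAX-RS
// annotations, so no mapping information needs to be repeated here.
class ExampleResourceImpl implements ExampleResource {
  @Override
  public Example findOne(Long id) {
    // A real implementation would delegate to a service or repository.
    return new Example(id);
  }
}
```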

The Client (the actual significant part)

Now for the fun part: bridging the gap left in the RESTEasy docs and tying those client proxies into your system.  Using Spring 3 annotations for programmatic configuration, you can add the following to a re-usable client module:

import org.jboss.resteasy.client.ProxyFactory;
import org.jboss.resteasy.plugins.providers.RegisterBuiltin;
import org.jboss.resteasy.spi.ResteasyProviderFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RestClientConfiguration {

  @Value("${restServiceRoot}")
  private String restServiceRoot;

  public RestClientConfiguration() {
    RegisterBuiltin.register(
      ResteasyProviderFactory.getInstance());
  }

  @Bean
  public ExampleResource exampleResourceClientService() {
    return ProxyFactory.create(ExampleResource.class, restServiceRoot);
  }
}

This creates a new Spring bean named "exampleResourceClientService" (the name of the method or specified as a value for @Bean) which can then just be injected as needed (ideally using the Interface) and called like Example example = exampleResourceClientService.findOne(id);.  Additionally the above assumes you have a property named restServiceRoot which points to the server (such as "http://myservice.example.com/rest").

Conclusion

Following the above you should be able to create a client jar that can be included in any project and scanned to get an easily maintained client package for very little work.  Additional services can be created by adding new methods to the configuration class, and further customizations can be easily applied.  You end up with a nice simple REST API but with the kind of client proxy that is normally associated with heavier communication technologies.

Saturday, October 6, 2012

Why Constructor Injection Should Be Used

Spring and other DI frameworks normally support two types of injection: setter injection and constructor injection.  Since setter injection is the more flexible, it has become the far more popular approach, but making better use of constructor injection makes code clearer and more robust.

Setter Injection and JavaBeans

Setter injection will always get you where you want to be.  It also has the advantage of being in line with the standard JavaBean approach of using a default nullary constructor and then setting what is needed.  To start, in the context of more modern languages the concept of the standard JavaBean is laughable.  The idea that code like:

Person me = new Person();
me.setFirstName("Matt");
me.setMood("Laughing");

is some kind of win for convention is absurd.  It is verbose, and a visual correspondence to an underlying data structure is lost.  It is preferable to telescoping constructors with ambiguous arguments, but it doesn't seem to offer enough benefit to justify the potential problems (Groovy does offer some nice sugar that addresses this, but this post is about Java).

What's the problem?

Nullary constructors lead to objects that are in an inconsistent state if they have required members.  Continuing the example above, suppose every person must have a first name.  When that object is first created, it does not have a first name until the setter is called.  This is almost certainly not an issue in this code, since everything is wrapped up in the same stack frame and nothing should be able to accidentally sneak in between those two statements.  But it does limit the use of that class to thread-safe contexts such as the one above; the class itself is not inherently thread safe.

The other possible issue is mutability.  First names can change, but suppose this Person is being stored in some database where it has a numeric primary key.  Mucking around with that id after the object exists could be a recipe for disaster; locking the id down would make everything a lot safer.
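
Putting both concerns together, a constructor-based Person might look like this minimal sketch (the id handling is my own illustration):

```java
// The id is set once and locked; the first name is required but may change.
class Person {
  private final long id;    // primary key: immutable after creation
  private String firstName; // required at all times, but changeable

  Person(long id, String firstName) {
    this.id = id;
    setFirstName(firstName); // reuse the null check below
  }

  void setFirstName(String firstName) {
    if (firstName == null) throw new NullPointerException("firstName");
    this.firstName = firstName;
  }

  long getId() { return id; }
  String getFirstName() { return firstName; }
}
```

There is now no window in which a Person exists without a first name, and nothing can ever touch the id.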

Spring Cleaning

The simplest example of this problem in a DI container is a typical Spring-managed bean.  In general, most layered applications with persistence will consist of a service layer which interacts with the repository layer.  The code with setter injection (with annotations, and javax.inject since I like standards) could be something like the following (there may be typos since I'm IDE-dependent but am not using one, and I'm omitting the @Transactional annotation(s)).

@Service
public class SomeServiceImpl implements SomeService {

  private SomeRepository someRepository;

  @Inject //Or on the variable itself
  public void setSomeRepository(SomeRepository someRepository) {
    this.someRepository = someRepository;
  }

  @Override
  public void updateSomeData(DataStruct struct) {
    //Do some magic
    someRepository.storeData(struct);
  }
}

The possibility of updateSomeData being called while the object is in an inconsistent state is more apparent in this type of class, where you'd expect calls from multiple clients.  But...Spring takes care of that for you by wiring all of your beans together on start-up.  This does, however, become an issue when the bean is used outside of the context of Spring.  One of the pursuits of frameworks like Spring is to allow your code to remain decoupled from the underlying framework, but the code above is operating under a potentially fatal presumption.
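
To make that presumption concrete, here is a stripped-down, framework-free sketch (the repository interface is a stand-in):

```java
// Stand-in collaborator, just enough to compile.
interface SomeRepository {
  void storeData(String struct);
}

// The setter-injected service from above, with no container to wire it.
class SomeServiceImpl {
  private SomeRepository someRepository;

  public void setSomeRepository(SomeRepository someRepository) {
    this.someRepository = someRepository;
  }

  public void updateSomeData(String struct) {
    someRepository.storeData(struct); // NPE if the setter was never called
  }
}
```

A caller that forgets the setter only gets a NullPointerException when updateSomeData is finally invoked, far away from the real mistake.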

Constructor Version

@Service
@Transactional
public class SomeServiceImpl implements SomeService {

  private final SomeRepository someRepository;

  @Inject
  public SomeServiceImpl(SomeRepository someRepository) {
    if (someRepository == null) throw new NullPointerException();
    this.someRepository = someRepository;
  }

  @Override
  public void updateSomeData(DataStruct struct) {
    //Do some magic
    someRepository.storeData(struct);
  }
}

As you can see there is no substantial difference in code length, but this class is far stronger.  Spring operates in virtually the same way, only the annotation is moved to the new constructor (the XML version does require a slight amount of extra knowledge, but nothing that takes more than 10 minutes to pick up).  Objects of this class cannot be in an inconsistent state: their dependencies are checked and locked before the object is made available (throwing the NPE in the style recommended in Effective Java by Joshua Bloch, though other approaches could work).

An additional benefit is that the code has a clearer intent.  By defining the key dependencies of a class as constructor dependencies (and as final where relevant), the class becomes more self-describing: "These are the pieces that I am useless without and will blow up if I don't have them."  This can then be augmented by setter injection for those other managed beans which are looser, less essential dependencies (such as strategies that may be used in one or two methods, or a service which will be bypassed if not available).

Conclusion

Constructors provide a powerful means to ensure that your objects are in a consistent state throughout their lives, and they can help minimize needless mutability and the exposed API.  A possible downside of Spring (and other DI containers) is that the easy and consistent setter usage can discourage the use of constructors.  Proper use of constructors in place of setters leads to more resilient and more intentional code.

Sunday, September 23, 2012

Using Spring Form Binding When the View Resolver Doesn't Support It (Take 2)

So in an earlier post (what is now Take 1), I covered how to expose Spring form binding by passing through everything that is needed in the event that your view resolver of choice doesn't support it (more information is available in that post).  At the time I used a handler interceptor to move the logic outside of the view resolver.  This introduced the small annoyance that there was a check on each request to see whether this was the right kind of view.  Additionally, as discovered later, this also presents problems elsewhere...so it's time to push it back into the view resolver where it belongs.

Broken Exceptions

What caught up with me using that approach was using Spring exception handling.  Adding an exception handler to a controller, for instance, can prevent you from having to worry about system errors in addition to expected error conditions.  You obviously shouldn't be doing much with Spring form binding in your exception handling, but it's nice to be able to do something like throw the user back out to the form they were using (or another form for that matter) with an appropriate message.  For instance I was working on a form that was processed locally before communicating with a remote service.  In the case of most errors I wanted to give the user a chance to try again, but I didn't want to pollute the local code with _all_ of the remote concerns.

Into the View Resolver

But where?  Like before, I wanted to keep this piece decently modularized, and Spring didn't provide much to help.  After fishing through the source code for a bit, the best place seemed to be intercepting the call to render() on the View interface.

public void render(Map<String, ?> model, HttpServletRequest request,
    HttpServletResponse response) throws Exception;

This can be wrapped using a decorator:

public class RequestContextViewDecorator implements View {

  private final View innerView;
  private final ApplicationContext applicationContext;

  public RequestContextViewDecorator(View innerView, ApplicationContext applicationContext) {
    this.innerView = innerView;
    this.applicationContext = applicationContext;
  }

The first issue is getting around that pesky wildcard capture "?" in the model.  This is easily done with a small method to get back a known safe type:

  private Map<String, Object> getTypedMap(Map<String, ?> model) {
    Map<String, Object> typedMap = new HashMap<String, Object>();
    typedMap.putAll(model);
    return typedMap;
  }

This could easily be refactored into an abstract class that uses a template method which receives the typed map, if you have other classes doing similar things or like to add extra classes to keep things focused.  This object then inherits the same behavior covered in the first post, throwing what it needs into the model and then delegating to the wrapped View:

  @Override
  public void render(Map<String, ?> untypedModel, HttpServletRequest request,
      HttpServletResponse response) throws Exception {

    Map<String, Object> model = getTypedMap(untypedModel);

    if (exposeSpringMacroHelpers) {
      if (!model.containsKey(MODEL_KEY)) {
        model.put(MODEL_KEY, new RequestContext(request, response,
            ((WebApplicationContext) applicationContext).getServletContext(), model));
      }
    }
    // ...
    innerView.render(model, request, response);
  }


Now just plug in to your view resolver (in this case, again, the Surf view resolver):


public class RequestContextPageViewResolver extends PageViewResolver {

  @Override
  protected View loadView(String viewName, Locale locale) throws Exception {
    return new RequestContextViewDecorator(super.loadView(viewName, locale),
        this.getApplicationContext());
  }
}


And there you have it (after you wire your view resolver of choice into your Spring config): a nice OOP way of integrating the request context needed for Spring form binding, in a modular way, into a view resolver which for some reason or another does not have the functionality in its inheritance hierarchy.

Sunday, August 26, 2012

CSS Recipe for Making Elements Fill Their Container Height

A common desire when designing Web sites is to have a columnar layout, with something along the lines of a sidebar that is visually distinct from the main content, but which fully consumes the space.  This can be surprisingly difficult to implement consistently, and leaves many designers reaching for a background image on the container, even if only solid colors are required.  A more flexible solution is available using CSS and HTML alone (not even JavaScript is required).

Quick Disclaimer

Honestly, my single biggest reason for this workaround is that I loathe having to open up a graphics editor for something like this, particularly to only perform a slight adjustment.  I'm also neurotic enough that I want the static resources associated with a site to be truly relevant, so even one file that is conceptually redundant irks me.  This workaround isn't pristine, as it throws an element into the HTML which is solely for design purposes, though at this point I'm also increasingly viewing the DOM as the optimal place to pollute in little ways in the interest of keeping other, more complicated aspects simple and organized.  This solution is also limited to some scenarios.

Also as a quick note I'm writing this in HTML but I'm not doing it in any way orderly since I'm just typing into blogger, so inspect the elements rather than reading the source.

The Problem

Pretend for a moment that this container is a full document body as displayed in a browser
And you want to put another container in it, we'll use a sidebar since it's relevant:
Sidebar Content
But that's no good, so we set the height of the sidebar to be the full height of the container using height:100% and making sure the container is position:relative:
Sidebar Content

Looks good...until

Sidebar Content that is really long and no longer fits in the size that the container was originally assigned to and ends up spilling out of the edge
Now here's where the imagination kicks in, that's broken because the content jumps out of the background (and you can pretend you'd scroll down to view that area that is outside of the container). But not to worry...CSS has you covered, change the height:100% to min-height:100%...then the container will always be at least the full height but will expand to hold its contents.
Sidebar Content that is really long and no longer fits in the size that the container was originally assigned to and ends up spilling out of the edge
Problem solved, and you picked up a new CSS trick. More realistically though the main container will also be sized by percentage so it can scale to fit the visitor's screen (otherwise that's another issue), so we'll add some content to that area and change the height to a relative one.
Sidebar Content
Main content with all sorts of interesting facts that are normally longer than the sidebar content.
Argh...back to square one.

The Solution

So I went back and forth with this issue for a little while, like probably most people who have tried to solve it with CSS alone.  Every solution seemed to break when either the sidebar or the content area was larger, or when things happened like the page scrolling.  Finally one day I had the forehead-smacking realization (and this is where it gets slightly kludgy): why not just do both?  Not ideal, but for me still preferable to a background image.

When we last left the code, the sidebar would grow to hold its contents but would not consistently fill the content area.  This is the more relevant behavior from the DOM perspective since it flows properly, so we can leave that element alone...but then add a second element behind it that consistently fills the container.  For that you need an alternate positioning type: the often maligned "absolute" positioning, in this case combined with constraint-based positioning (alternate sizing options would also work).  Add a sibling element beneath the sidebar on the z-index, within the same position:relative parent element, with absolute positioning, the same width as the sidebar, and a value of 0 for (in this case) top, right, and bottom so that it fills up the entire right side.
Sidebar Content
Main content with all sorts of interesting facts that are normally longer than the sidebar content.
So that now checks out...now the other way around:
Sidebar Content that is really long and no longer fits in the size that the container was originally assigned to and ends up spilling out of the edge
Main content with all sorts of interesting facts that are normally longer than the sidebar content.

(Again, pretend you would scroll down to see the part overlapping the "window".)  And...perfect(ish).  So long as your DOM is in decent shape this works like a charm; it has been put through its paces on several sites and seems to be quirk-proof in major browsers.  This could also be handled by JavaScript on page load.  Overall it's a slightly more complicated solution than a background image and may not be for everyone...but it is likely more maintainable, and is also arguably more conceptually accurate than attaching the primary visual representation of one element to a resource associated with another element.

Quick Amendment

I realized after I wrote this that it was missing another piece: the container should expand to hold the sidebar, if needed, now that the sidebar has a relative height.  Since the sidebar is presently floated it does not expand the container.  This can be solved with the typical clearfix workaround.
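
Putting all of the pieces together, the recipe might look something like the following sketch (the class names and widths are mine):

```html
<style>
  .container { position: relative; }

  /* The in-flow sidebar: grows with its own content. */
  .sidebar {
    float: right;
    width: 200px;
    position: relative; /* paint above the backer */
    background: #eee;
  }

  /* The backer: always fills the container's full height. */
  .sidebar-backer {
    position: absolute;
    top: 0; right: 0; bottom: 0;
    width: 200px;
    background: #eee;
  }

  /* Keep the main content out from under the backer. */
  .main { margin-right: 200px; }

  /* The clearfix from the amendment, so the container expands
     around the floated sidebar. */
  .container:after { content: ""; display: table; clear: both; }
</style>

<div class="container">
  <div class="sidebar-backer"></div>
  <div class="sidebar">Sidebar Content</div>
  <div class="main">Main content with all sorts of interesting facts.</div>
</div>
```

Whichever of the two sidebar elements is taller wins: the in-flow sidebar stretches the container, and the backer fills whatever height the container ends up with.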

Sidebar Content that is really long and no longer fits in the size that the container was originally assigned to and ends up spilling out of the edge
Main content with all sorts of interesting facts that are normally longer than the sidebar content.

Saturday, August 18, 2012

JavaScript: More than a Scripting Language?

With the help of HTML5 and modern Web development, JavaScript is finally getting regarded as a real programming language.  But is JavaScript really equipped to handle all of its new responsibilities?

Background

JavaScript/ECMAScript has spent most of its life being written by people who learned just enough of it to create simple scripts, and those scripts were then copied and pasted by those who normally didn't know more than how to set variables in those scripts.  Thanks to things like AJAX, jQuery, and modern Web browsers JavaScript is now more powerful, easier to work with, and an essential part of any modern Web application.

There is a relatively new trend of powerful JavaScript libraries and frameworks to create complete applications in JavaScript and allow for MVC style development in the client.  Node.js has even moved this to the server side.

Why JavaScript Rocks

Node.js is a particularly interesting case because JavaScript was chosen not out of a desire to move JavaScript to the server, but because JavaScript had the qualities that were desired: most notably painless support for asynchronous programming.  To be honest, I've been meaning to tinker with Node for a while now, but it keeps getting preempted on my list of technologies to explore.

JavaScript has a lightweight syntax and has evolved to handle asynchronous evented programming better than most languages.  Its dynamic typing, prototypal inheritance, and first class functions can allow for rapid implementation of complex functionality.  JavaScript almost certainly allows for some of the most rapid development of any popular language available (from a language perspective, the platform as a whole is still relatively sparse).

JavaScript is dynamite!...

It can destroy your requirements faster than most anything else...but it can also take your foot along with it.  The most maligned feature of JavaScript is the global namespace.  Rather than go into the features individually, I'll instead use the global namespace as a symbol for the limitations of JavaScript: it allows for faster, worry-free development at the cost of structure.  JavaScript was, as the name suggests, created to be a scripting language written in relatively short snippets to glue other pieces together within a host environment.  The prototypal inheritance and general dynamism further support this: a script can evolve as it executes rather than being designed beforehand.  The immense productivity comes with an immense danger of writing tangled, coincidental code, particularly if you mix in some temporal concerns due to asynchronicity.
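
As a tiny, contrived illustration of that trade-off, two scripts sharing the global namespace can quietly corrupt each other, with no error ever raised:

```javascript
// Script A: a widget that keeps its state in the global namespace.
var counter = 0;
function bump() { counter += 1; }
bump(); // counter is now 1

// Script B, loaded later, happens to reuse the same name.
var counter = "ready";

// Script A's next update still "works" -- as string concatenation.
bump(); // counter is now "ready1", and nothing ever complained
```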

Should you use JavaScript?

Whether to use JavaScript can be reduced to the structure-vs-flexibility debate that flickers around dynamic vs static languages.  From my perspective that question leads to a human factor.  Structure is always needed for a maintainable system, so the variable becomes how much of that structure is provided by the language (and supported by tools) vs how much must be maintained by developer discipline.  JavaScript is a particularly dynamic language, and therefore places a particular onus on the developers to code in a way that keeps the application maintainable.  As a project and its associated team size grow larger, this is likely to become increasingly difficult.

JavaScript is a great language for small pieces of functionality.  It also provides an attractive and viable option to produce small applications.  As the size of the application grows, however, the dynamism is likely to get increasingly difficult to manage.  JavaScript is therefore best used in manageable chunks within a larger infrastructure.  A single piece of functionality or the analog of the functionality provided by a small mobile app, essentially a single namespace/package, would be a good constraint for the extent of a JavaScript library's reach.   In that context JavaScript is very good at what it does and it should certainly be used (or CoffeeScript).  

The Future

One of the elephants in the room remains things like prototypal inheritance and first-class functions: those JavaScript features that are alien to most developers.  This has led to many people trying to jam JavaScript into behaving more conventionally.  Prototypal inheritance is not something that has any momentum, and as much as I hope for wider adoption of first-class functions and techniques like higher-order functions, I think much of that is too abstract to be as digestible as something like object orientation.  The present JavaScript space is a very exciting one and it will provide a lot of useful fodder for the future.  There seem to be fundamental issues with JavaScript, however, and whether it is able to evolve thoroughly and quickly enough to increase its reach beyond small packages and reach a wider audience of developers is still uncertain.  Most importantly, the primary focus of JavaScript must remain to continue serving its present function as well as it does now, which may always be at odds with extending its role.

Sunday, August 12, 2012

Using Spring Form Binding When the View Resolver Doesn't Support It

Spring form binding is a convenient way to get valid data objects from a user in Spring MVC. If you need to use a View technology other than JSP, however, things may not just work, so here's some information that may fill in the gaps.

The situation

The particular situation I encountered involves using Freemarker as a template language.  Above and beyond just Freemarker, I'm also using Spring Surf, so the standard solution (covered below) doesn't apply.  This post covers a directly usable solution that works in a technology-agnostic way (aside from Spring) and should at least provide information for other solutions.

Standard solution and what's going on

The standard means of setting up Freemarker (and other View technologies) for Spring form binding is to add some settings to the view resolver configuration.  Something along the lines of:

<bean id="viewResolver" class="org.springframework.web.servlet.view.freemarker.FreeMarkerViewResolver">
   <property name="exposeSpringMacroHelpers"><value>true</value></property>
   <property name="exposeRequestAttributes"><value>true</value></property>
   <property name="exposeSessionAttributes"><value>true</value></property>
</bean>

where the first property does the needed set-up for form binding (the other two merge request and session info into the model).  Tracking this down through the source code, this setting is ultimately passed from the view resolver to the AbstractTemplateView (source), which adds a RequestContext to the model (and does the request & session merging).

Re-using that behavior

Unfortunately this is normally re-used through inheritance, and in the case of Spring Surf neither the relevant view resolver nor the AbstractTemplateView class is in the hierarchy being used.  It could also be argued that this functionality shouldn't really be handled by the view resolver at all since it is more of an application concern, though I'd say both sides of that argument carry pretty even weight.  I'd certainly argue that one way or the other it should be made more modular; for speed I resorted to the cut-and-paste route.

A sensible place to get the Model set up as needed for the View to hook into it is right between when the Controller finishes its work and when the View resolver does its resolving.  In Spring this can be done with the postHandle hook of a HandlerInterceptor.  For consistency I've borrowed the same properties/flags as the AbstractTemplateView.  One caveat of running before the View resolver is that every request will be intercepted, even those that aren't relevant and possibly don't have a Model, such as those handled by a MessageConverter.  A null check takes care of that.

A sample interceptor would then be something like (season to taste):

public class RequestContextInterceptor extends HandlerInterceptorAdapter implements ApplicationContextAware {
  private ApplicationContext applicationContext;

  private boolean exposeSpringMacroHelpers = true;
  private boolean exposeRequestAttributes = true;
  private boolean exposeSessionAttributes = true;

  public static final String MODEL_KEY = "springMacroRequestContext";

  public void setExposeSpringMacroHelpers(boolean exposeSpringMacroHelpers) {
    this.exposeSpringMacroHelpers = exposeSpringMacroHelpers;
  }
  //...Other setters

  @Override
  public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {

    //When using message converters or other non-model requests
    if (modelAndView == null) return;

    if (exposeSpringMacroHelpers) {
      if (!modelAndView.getModel().containsKey(MODEL_KEY)) {
        //Throw together a usable RequestContext...seems to require ApplicationContextAware-ness
        modelAndView.addObject(MODEL_KEY, new RequestContext(request, response,
            ((WebApplicationContext) applicationContext).getServletContext(), modelAndView.getModel()));
      }
    }

    if (exposeRequestAttributes) {
      //Adapted from AbstractTemplateView: merge request attributes into the model
      for (Enumeration<String> en = request.getAttributeNames(); en.hasMoreElements();) {
        String attribute = en.nextElement();
        if (!modelAndView.getModel().containsKey(attribute)) {
          modelAndView.addObject(attribute, request.getAttribute(attribute));
        }
      }
    }

    if (exposeSessionAttributes) {
      //Adapted from AbstractTemplateView: merge session attributes into the model
      HttpSession session = request.getSession(false);
      if (session != null) {
        for (Enumeration<String> en = session.getAttributeNames(); en.hasMoreElements();) {
          String attribute = en.nextElement();
          if (!modelAndView.getModel().containsKey(attribute)) {
            modelAndView.addObject(attribute, session.getAttribute(attribute));
          }
        }
      }
    }
  }

  @Override
  public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
    this.applicationContext = applicationContext;
  }
}

This can then be wired in to Spring:

    <mvc:interceptors>        
        <bean class="com.example.handlerinterceptors.RequestContextInterceptor">
          <property name="exposeSpringMacroHelpers" value="true"/>
          <property name="exposeRequestAttributes" value="true"/>
        </bean>
    </mvc:interceptors>

To avoid conflicts, disable the settings in the view resolver configuration.
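Assuming the resolver from the earlier snippet is still in play, that would mean something like (depending on your Spring version some of these may already default to off, in which case simply omitting the properties works too):

```xml
<bean id="viewResolver" class="org.springframework.web.servlet.view.freemarker.FreeMarkerViewResolver">
   <property name="exposeSpringMacroHelpers" value="false"/>
   <property name="exposeRequestAttributes" value="false"/>
   <property name="exposeSessionAttributes" value="false"/>
</bean>
```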

For Freemarker there is also a form binding macro library that is normally automatically exposed for use.  Rather than muck around with getting that working, and also because I like to be able to easily reference the source for that file, I opted to just download the file and use it as a normal Freemarker import.
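The library in question is spring.ftl, which ships inside the spring-webmvc jar (under org/springframework/web/servlet/view/freemarker).  Once it's reachable as a template, binding in a Freemarker view looks roughly like this (the "person" command object and its name field are hypothetical, as is the form action):

```ftl
<#import "spring.ftl" as spring/>
<form action="save" method="post">
  <#-- Binds to person.name and renders an <input> populated with the bound value -->
  <@spring.formInput "person.name"/>
  <#-- Renders any binding/validation errors for the most recently bound field -->
  <@spring.showErrors "<br/>"/>
  <input type="submit" value="Save"/>
</form>
```

The macros resolve their values through the springMacroRequestContext model entry, which is exactly what the interceptor above puts in place.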

And there you have it: guidelines for a usable solution that is more portable than the out-of-the-box offering, or at least some guidance that may help lead you where you need to go.  


Saturday, August 11, 2012

Bouncing Google Play apps onto a Kindle Fire

I'm going to add to the long list of Internet articles that describe installing apps from Google Play (f.k.a. Google Market) onto a Kindle Fire with my adopted approach (which requires another Android device).  This is nothing too interesting but does the trick with minimal effort.

Kindle Fire Notes

I got my Kindle Fire as a gift this past Christmas.  It is a nice little tablet that comes loaded with a version of Android which has been customized by Amazon and does not include the Google framework or applications, and installation of these is not officially supported.  If you are considering purchasing one and stumbled across this post to weigh your options: from my perspective the Fire is a good choice if you're particularly interested in the Amazon-centric offerings (obviously), but you'd be better off with one of the other options for an all-around product (particularly with some of the new offerings in the same price range).  I personally don't use the Fire for much more than reading, and at some point I may also tinker with the Amazon-flavored Android SDK, so the Fire fits the bill for me.

Rooting (not required)

You can fairly easily get root access to a Fire and install the Google framework, Play, and any other Android software; there are plenty of sites with instructions.  You could also wipe the device and install a more standard Android distribution.  If you're just looking to install some apps, though, it's easier to just install those apps using sideload-style direct installation (my Fire is rooted, but Play didn't work immediately and I haven't had a need to spend any time fixing it).

Instructions (the significant part)

Installing most apps is as simple as running the apk package on the device.  The big obstacle is that most apks are only served through stores, so the trick is to get them positioned where the Fire can grab them without a cumbersome process.  I like to keep my devices as self-sufficient as possible and on the day I was looking to install software I was far too lazy to take the walk to my car to retrieve the needed cable to connect my Fire to my computer, so this is also a PC-less method with no cables required.

Step 1. Install app on Android device (on the source device)

Self-explanatory: I have an Android phone with Play installed, so install the app as you normally would.

Step 2. Stage the downloaded apk in an accessible location (source device)

Getting to the .apk

First you need a way to access the apk, and then to track it down.  Like most other sites, I'll recommend ES File Explorer as a file manager.  I started to poke around to find the file system path to the app I was looking for, but with ES you don't even need to do that.  After it is opened, bring up the menu and go to Manager->App Manager, which will list the installed apps.  Back up the app you want to bounce to the Kindle and keep track of the directory where the backup will be stored.

Staging the .apk

A simple way to get the package onto the Fire is using a cloud drive/backup type of solution.  Dropbox is a nice, widely supported one, which also has an option to download its apk directly so it can be easily installed onto a Kindle Fire with no fuss.  Install Dropbox onto both devices.  In ES File Explorer on the source device, navigate to the directory where you created the backup and bring up the context menu (tap and hold) on the file you want to transfer.  Select "Share" and then "Dropbox" to upload the file to your Dropbox account.

Step 3. Install the apk on the Kindle Fire

You can now just navigate to the file using Dropbox on the Kindle Fire and run the apk to install the application.  So long as the application is compatible (and dependencies are met) there should be no issues and the app is ready for action.

Conclusion

This is yet another recipe for installing apps not available through the Amazon Appstore onto a Kindle Fire.  Very straightforward, but it could be useful for anyone who, like me, doesn't normally plug their Android devices into their computers.  Another big advantage to the cable-less approach is that it can be easily done asynchronously: if there is an app you'll want to install, it can be staged at any time using the source device and then installed on the Fire any time thereafter.  This is particularly relevant since, without Play itself installed on the Fire, updates will also need to be bounced over.  Hopefully this can be of assistance to anyone who is looking for those handful of apps for their Fire that just aren't available on the Amazon Appstore.


Saturday, July 7, 2012

Why Becoming a Casually Practicing Emacs-er is a Good Religious Choice

If you haven't already been indoctrinated to a text editor religion, joining the emacs flock has its perks.  I'm not saying you should go the strict devotional route, but being able to recite the basic incantations enough to blend in during the occasional service pays off.

What about that other religion?

That heading is more or less the end of the religion metaphor (even though it is a heading).   In the *nix world there are two old families of text editors which have "religious" followings: vi & emacs.  From my perspective vi is far more "Unix-y": it's minimal, powerful, modular...like a well written C program (like Unix itself).  Emacs on the other hand is overtly lisp-y, with an almost fluid sense of indirection and dynamism, making it a misfit in the *nix world (at least from what I've seen; even though Guile is a "standard" I've only run across it in GnuCash).  There is also a related performance difference in that vi programs are normally lighter weight, but that difference is inconsequential on modern computers.

I made the jump from vi (Vim) to Emacs (GNU Emacs) a couple years ago since I found Emacs more conducive to self-discovery with its combination of descriptive commands and powerful Info system.  But...if you want to become a devotee and actual user of either, that is entirely a matter of preference, and I'm not saying one is better than the other by itself.  vi has a fatal flaw in regard to the portability of knowledge, however, in that it has the concept of modes (so, for instance, to edit content you enter an edit mode, and to move the cursor you switch back to command mode and use what are often the same keys).  This concept doesn't map to most newer editors.

What is Emacs good for?

Dedicated Emacs users would answer that question with "anything".  Although I don't disagree with that sentiment, it can be difficult to convince others that Emacs can do anything Tool-X can do, with finer control, once you spend the time to configure it and get comfortable with the bindings.  Even being able to customize the colors of the text display is often not enough to win them over.  So for everyone else:

Key bindings

Continuing from the vi comparison: Emacs doesn't use modes in that way and instead uses key combinations with modifier keys (e.g. Ctrl-s, represented in Emacs as C-s) to perform commands, just like most modern editors.  Emacs is a little binding-happy, though, with what seems to be an obscene number of key bindings, and it uses combinations/sequences to increase the number of possibilities and contextualize the functionality.  The payoff is that Emacs key bindings are supported in all sorts of unexpected places.  Most IDEs have Emacs binding support, in addition to the console and shell utilities.  So once you learn them and they become natural, you can use them most anywhere.  This was particularly brought to my attention not long ago when one of my co-workers was curious about how I was doing things in an OS X terminal.

For simple"ish" tasks

A simple description of Emacs is that it is a text editor.  Coming from a GUI background, Emacs will seem odd and there will be a natural tendency to use a standard GUI editor or something simpler like nano on the command line.  But Emacs offers a powerful environment that can be expected to be present on *nix (and OS X) systems and is easily made available elsewhere.  That means you always have access to an integrated editor for doing things like editing files, performing file system operations, and running shell commands.  This is complete with switching and splitting windows and powerful syntax-aware file editing from both the GUI and the command line (which also translates to working remotely over a shell connection).

This whole section applies equally to vi, but as mentioned above the key bindings and mode paradigm aren't portable from vi.  

How to join the cult of Emacs

So by now you're surely convinced that Emacs is worth an nth look and you're ready to get your feet wet.  Taking the built-in Emacs tutorial to get used to the basic editor commands is the standard first step.  The program offers great built-in help and command completion; as long as you become familiar with these, you don't really have to remember anything else.  The learning overhead can therefore be reduced to getting familiar with Emacs's built-in Info reader and its lingo.  Emacs uses different terms than you may be used to, so once you know those terms it's usually easy to figure out how to do what you're trying to do.  Learning some of the help commands and leaning on M-x can put everything within easy reach.

The next step is to just use Emacs for little things.  Use it as a simple text editor (check out org-mode), get comfortable with splitting the window and switching between buffers and then check out extensions like dired and eshell.

Conclusion

Sooner or later everyone needs to do something that falls in the gaps between simple file editors and richer environments.  The usefulness of Emacs grows alongside your familiarity with it.  Using it for simple tasks now and incrementally discovering its features will provide you with a powerful platform for more complex processes.  As the use of Emacs key bindings becomes automatic, the benefit can transcend the program itself due to the near-ubiquitous support for Emacs key bindings.