The developer's role has changed significantly over the years, never more so than with the adoption of Agile. Not long ago, my role meant focusing on expanding an application's functionality using just a handful of tools. It was someone else's job to worry about the architecture, delivery and testing. But that has changed. The shift to Agile means that roles and responsibilities are shared. Add that to the ever-increasing number of development tools, and it becomes extremely difficult, if not impossible, to know everything you need to know to develop a quality product. That's where communication and collaboration come in, not just within the team but between teams and across departments.
The modern developer's role is no longer an isolated one. Developers work as part of a team, and it is the team's responsibility to develop an efficient and effective solution. Common Agile practices such as pair programming and peer reviews are invaluable for improving a solution during development. A good Agile team has a range of experience covering different skill sets, and an effective team works together to make full use of that knowledge and experience in everything it does.
Communication between teams can be just as important. Conferences such as JAX London, along with various websites, are an ideal source of information from leading experts, providing recommendations on current practices and introductions to the latest tools. But not many companies can afford to send all their developers to every conference, and not every developer will read the same websites, so setting up communities of practice and regular 'brown bag' sessions enables knowledge to be shared across teams.
As well as communication between developers, there is also a need to interact with other departments. Regular discussions with business representatives, whether daily or as part of a sprint review, are essential to developing the right functionality and producing a solution with which everyone is happy. Coding should also take into account the platform on which the application will run, which means talking to system administrators to gain a good understanding of the production environment.
In summary, the world of development is constantly changing and the developer's role is no different. With more and more skills to learn, more and more tools to understand and more and more practices to adopt, communication and collaboration with colleagues throughout the company are critical for producing high-quality software that meets the requirements.
One of the key themes at the JAX Conference this year was continuous delivery, in particular how it can be achieved with 'containerised' applications: rather than delivering an application and then installing and configuring it directly on existing systems, we deliver container images that contain our applications pre-configured and ready to run. At Aquila Heywood, the pathfinder teams from our Agile transformation created new environments and pipelines to help us build our product faster. These were built from containerised versions of our applications, for internal testing and for integration between teams. Using Jenkins multi-branch pipelines, we can automatically create complete development and testing environments for each new Git branch of our products. The next logical step is to integrate these images into a full delivery pipeline. The conference raised a number of interesting considerations around this idea, detailed in the following sections.
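As a sketch of the per-branch approach, a declarative Jenkinsfile in a multi-branch pipeline might look something like the following. The image name, stage names and commands here are illustrative assumptions, not our actual configuration; `BRANCH_NAME` is an environment variable Jenkins provides automatically in multi-branch builds.

```groovy
// Minimal declarative Jenkinsfile sketch for a multi-branch pipeline.
// Image names and commands are illustrative only.
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                // Tag the image with the branch name so each branch
                // gets its own containerised build.
                sh "docker build -t myapp:${env.BRANCH_NAME} ."
            }
        }
        stage('Start branch environment') {
            steps {
                // Spin up a test environment from the freshly built image.
                sh "docker run -d --name myapp-${env.BRANCH_NAME} myapp:${env.BRANCH_NAME}"
            }
        }
    }
}
```

Because the branch name is baked into the image tag and container name, environments for different branches can run side by side without colliding.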
Writing Dockerfiles (scripts for the automated creation of images) brings new challenges for developers: What operating system should I use? What are the benefits of one Linux distribution over another? How are the containers going to talk to each other? What command-line tools do they provide, and how do I use them? Previously, these sorts of problems and decisions would have been handled by the Ops team or system administrators; now they are made by our Scrum teams while building sprint deliverables. This reinforces the need for good DevOps collaboration, to ensure knowledge transfer and to make sure teams have the resources they need to see their stories through to completion (including delivery).
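Each of those questions surfaces directly in even a minimal Dockerfile. The sketch below is illustrative only: the base image, paths and port are assumptions rather than anything from our product.

```dockerfile
# Illustrative only: image name, paths and port are assumptions.
# The FROM line answers "what operating system / distribution?"
FROM eclipse-temurin:17-jre
# Deliver the application pre-configured inside the image.
COPY target/myapp.jar /opt/myapp/myapp.jar
# The port answers "how will containers talk to each other?"
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/myapp/myapp.jar"]
```

Every line is a decision that would once have sat with Ops: the distribution, the filesystem layout, the exposed interface, and how the process is started.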
Whereas, previously, our build pipelines might have passed along JAR files as the 'single binary' for delivery, that role is now performed by container images. This presents new challenges in terms of storage and management. Docker repositories (such as Sonatype Nexus or a private Docker registry) are needed instead of Maven repositories for our build artifacts. New metadata is also needed to control which image is used at each stage of the delivery pipeline. For example, careful attention must be paid to image labelling, to ensure that deployments are built from the correct artifacts and that changes destined for deployment are not missed. Using the 'latest' (default) tag of an image does not necessarily mean that the newest version of the image will be used: 'latest' is simply the tag applied when no explicit tag is given, so it refers to the last build that was pushed without one.
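The tagging pitfall can be made concrete with a short command sketch. The registry address, image name and version here are hypothetical.

```shell
# Illustrative commands; registry, image name and version are assumptions.
# Build and push with an explicit version tag so the pipeline can pin it:
docker build -t registry.example.com/myapp:1.4.2 .
docker push registry.example.com/myapp:1.4.2

# Pulling without a tag is equivalent to pulling ':latest'.
# 'latest' is just a default tag name, not a guarantee of the newest build:
docker pull registry.example.com/myapp          # same as myapp:latest
```

Pinning explicit version tags at each pipeline stage is what makes a deployment reproducible: the same tag always resolves to the same artifact.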
Additionally, to ensure consistent application behaviour between development and deployment (and thereby reduce potential bugs), developers need to work against local Docker environments, so that the containers they build and run are identical to those being deployed. No separate 'test' images should be permitted, as these can drift in configuration until they no longer represent what is being delivered. When the environments are aligned, we can be confident in the images we deliver and can spot any potential issues with particular combinations of container images and application features as quickly and cheaply as possible.
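In practice, keeping development aligned with deployment can be as simple as running the exact tagged image the pipeline will ship, rather than building a variant locally. Image name and tag below are, again, illustrative.

```shell
# Illustrative: run locally the very same tagged image the pipeline deploys,
# rather than a locally assembled 'test' variant that can drift.
docker pull registry.example.com/myapp:1.4.2
docker run --rm -p 8080:8080 registry.example.com/myapp:1.4.2
```

Because the image is pulled by its pinned tag from the shared registry, what runs on a developer's machine is byte-for-byte what will run in production.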
To ensure that our new deliverables do not contain any security vulnerabilities, we need to know exactly what our container images, and the base images on which they are built, contain. For images with multiple layers, this can prove difficult and time-consuming. Tools such as CoreOS Clair can help identify potential risks early in the development pipeline, allowing them to be remedied quickly and without disruption to our customers.
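As a sketch of where such a scan might sit, the community `clair-scanner` CLI can be pointed at a running Clair service from a pipeline step. The addresses, image name and exact flags below are assumptions from the tool's common usage and may vary between versions.

```shell
# Sketch only: addresses, image name and flags are assumptions and
# may differ by clair-scanner version. Requires a running Clair service.
clair-scanner \
    --clair=http://clair.example.com:6060 \
    --ip=10.0.0.5 \
    --report=report.json \
    myapp:1.4.2
```

Run as a pipeline stage after the image build, a non-zero exit from the scanner can fail the build, so vulnerable images never progress towards deployment.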
The choice of Linux distribution for our images is also important. More traditional distributions may come with a large selection of tools, each increasing the attack surface of our deployments. There are now distributions, such as Alpine Linux, that are designed specifically with containers in mind, not only for performance reasons but also for their smaller attack surface and additional security hardening.
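Switching to a smaller base is often a one-line change. The sketch below assumes a hypothetical Java service and an Alpine-based variant of the runtime image; the names and paths are illustrative.

```dockerfile
# Illustrative: the same hypothetical service on an Alpine-based runtime,
# trading a full distribution for a much smaller image and attack surface.
FROM eclipse-temurin:17-jre-alpine
COPY target/myapp.jar /opt/myapp/myapp.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/myapp/myapp.jar"]
```

Fewer packages in the base image means fewer tools available to an attacker inside a compromised container, and less to patch when vulnerabilities are announced.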
With these considerations in mind, we can start to build delivery pipelines that not only let us deliver new features to our customers more quickly, with less configuration required, but also help us gather feedback from our customers faster and build more relevant products.