Category Archives: Software Engineering

Mixing release support and new development

The first thing that comes to one’s mind is “don’t do it!” And frankly, it’s far from best practice.

But my recent project experience has been exactly the opposite. With known-but-not-yet-confirmed feature requests from the client for the next version, our dev team was split into two, because:

  1. the initial release had just occurred, and we had accumulated a variety of technical debt.
  2. we needed to estimate roughly how long it would take us to implement the next release vertically.

Not exactly in the order below, these are the steps we usually go through:

  • release operation check
  • pop support requests to fix
  • resolve issues
  • QA the fix
  • release or reopen
  • discuss new features
  • estimate roughly, in vertical epics
  • find ways to componentise at the epic level (emergent architecture)
  • link data-flows between components
  • define interfaces (see the sketch after this list)
  • prototype interface integrations
  • work on positive workflows using automated/manual tests
  • slice epics into stories
  • resolve any interface or data-flow feature discrepancies
  • find ways to not reinvent complex wheels
  • implement as quickly as possible
  • plan load testing
  • acceptance testing
  • identify risks and plan for them
  • release and support
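
To make the "define interfaces" and "prototype interface integrations" steps concrete, here is a minimal C# sketch of the kind of contract we draft between two vertically-sliced components at the epic level. The names (IOrderExportService, OrderSummary) are hypothetical and purely for illustration; the point is that the interface, and the data flowing across it, get pinned down before either side is fully implemented.

    // Hypothetical contract between two vertically-sliced components.
    // The consuming component codes against this interface; the providing
    // component implements it, so both sides can be prototyped in parallel.
    public interface IOrderExportService
    {
        // Returns only the fields the consumer needs (the agreed data-flow),
        // not the whole domain object.
        OrderSummary GetSummary(int orderId);
    }

    // Minimal message crossing the component boundary.
    public class OrderSummary
    {
        public int OrderId { get; set; }
        public string CustomerName { get; set; }
        public decimal Total { get; set; }
    }

    // Throw-away stub used while prototyping the interface integration,
    // before the real implementation exists.
    public class StubOrderExportService : IOrderExportService
    {
        public OrderSummary GetSummary(int orderId)
        {
            return new OrderSummary { OrderId = orderId, CustomerName = "Sample", Total = 0m };
        }
    }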


Re-defining Software Architecture

Management made the call. Instead of continuing to work on the massive, RIA plug-in-based application, the multi-year project will now aim to deliver in smaller, faster release cycles.

I rather welcome the decision. The previous project (a.k.a. FRED) cost so much, in both infrastructure and ongoing development, that it actually cost the VP his job at the company. (I won’t go into the details of that.) The change actually fits Scrum well. Looking back, the FRED project felt like BDUF (Big Design Up Front). The design/architecture came from a third-party partner; it had modules that were logically separated out, but it was built entirely on a proprietary technology instead of standards. That approach was probably the result of our needs never being properly explained. So after more than a year of implementation, we were left with a brain-child product that did not fit the needs of the business.

Sounds like any other requirements-mismatch episode? I’m not denying it – in fact, I accept that. The only thing that matters now is taking the heap and turning it into gold.

The good news is that, in FRED, we have a benchmark to compare ourselves to. The better news? We just released the first Vertical Slice Component – and it took less than half the budget of FRED, in half the time, to deliver a working product that performs well.

Turning the data we have into an asset and a facilitator of new insights into trends and patterns is our goal now, not just matching a spec that may “turn out not to be modular after all”.

Release 2 is in the works. Sprint 19 comes next.

Make it work, at what cost?

Our team has just ‘finished’ the initial release phase of a multi-year project. We acknowledged each other’s effort in a round of handshakes and smiles, knowing we had put in an unusual amount of effort over the past week.

Still, the project did not deliver on all of its goals. Regardless of the circumstances, there was a bittersweet feeling in not seeing the full implementation in action. No one wanted an incomplete deliverable, yet it happened.

It’s time to chew on what went well and what went wrong – to learn from the experience and also to evaluate these points from process and cost perspectives.

What went well:

  1. Cohesive teamwork
  2. Focus on the deliverables
  3. Selecting appropriate technologies to use
  4. Willingness to go the extra mile
  5. Getting the needed support from management

Yet there were also things that went wrong:

  1. ‘Pockets’ of project information, where only a few people know the critical data/logic
  2. Insufficient time to completion
  3. Getting critical data at the last minute
  4. Changes in solution approach, late in the project

Commitment from the development team members definitely shone through the entire process; no one doubts that.

Process and Cost Analysis

One approach is to see how much technical and other debt we have on the project, then estimate the amount of effort required to reduce that debt.

Technical Debt

I am finding that there is quite a bit of technical debt on multiple fronts:

  • Bugs in the UI event-handler process(es) that are contributing to data interchanges at unintended points in the process (these may have to do with the various IoC registrations done using the Unity Application Block).
  • Instead of a minimal message payload, the service-consuming application is invoking the service with an entire object serialized as XML, which is already too verbose (see the sketch after this list).
  • The data persisted in MongoDB also houses unnecessary data.
  • Data retrieval takes too long.
  • Logic applied in retrieving data is not finalized.
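
As a rough illustration of the messaging debt above: instead of serializing an entire domain object to XML on every call, the service contract could accept a small, purpose-built message. The classes below (CustomerRecord, CustomerNameChange) are hypothetical sketches, not our actual types.

    // What we effectively do today: the whole entity goes over the wire as
    // XML, even when only one field changed.
    public class CustomerRecord
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Address { get; set; }
        public string Notes { get; set; }          // large free-text field
        public byte[] ProfileImage { get; set; }   // binary blob, serialized too
    }

    // A leaner alternative: a minimal message carrying only what the
    // operation actually needs.
    public class CustomerNameChange
    {
        public int CustomerId { get; set; }
        public string NewName { get; set; }
    }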

Documentation Debt

  • Reasons behind some of the mid-project changes in solution approach
  • Service APIs are not ReSTful yet, and API documentation is lacking (see the sketch after this list)
  • UI/Event + Unity process documentation is lacking
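
For the "not ReSTful yet" item, here is a minimal sketch of what a resource-oriented contract could look like on the WCF stack we already use. The resource names, URI templates, and types are assumptions for illustration only, not the actual API.

    using System.ServiceModel;
    using System.ServiceModel.Web;

    // Hypothetical resource-oriented contract: nouns in the URI, HTTP verbs
    // for the operations, instead of RPC-style method names.
    [ServiceContract]
    public interface ICustomerResource
    {
        [OperationContract]
        [WebGet(UriTemplate = "customers/{id}", ResponseFormat = WebMessageFormat.Json)]
        CustomerSummary GetCustomer(string id);

        [OperationContract]
        [WebInvoke(Method = "PUT", UriTemplate = "customers/{id}")]
        void UpdateCustomer(string id, CustomerSummary customer);
    }

    public class CustomerSummary
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }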

All this debt, on both the technical and documentation fronts, makes me wonder how the project was deemed successful by project management.

Among the documentation debts, not knowing the reasons behind some of the solution-approach changes is a particularly cumbersome item. Missing documentation is always a burden, but this gap concerns the real ‘reasons’ behind the changes, which makes it more problematic than, say, a missing piece of API documentation. Developers generally do not have to concern themselves with the business reasons, but project management definitely does. And if the reasons were technical in nature, or involved technical input, then the developers should also be aware of them, to a reasonable extent.

Maintenance within a Single Scrum Team

During the past month, a relatively new form of resource management has taken hold within our team.

Two separate factors contributed to this shift of developer resources:

1. Budget Constraints

2. Compliance Requirements

The first, budget, came from management’s decision not to extend most of the consultancy contracts for a specific product, all of which were already overdrawn. With a new product offering in the company portfolio for next year, the in-house developer resources have been tapped to finish Release 1.

The second, compliance, was the legal overhead of meeting certain Sarbanes-Oxley requirements, so that we have proof that our security practices are effectively in place.

For now, it should suffice to mention that I have come to experience both of these factors at most companies where software is now an integral part of the portfolio – or suite of products – and therefore has a direct effect on the bottom line.

Given these concerns, our team decided to deal with developer scarcity by splitting the single team into two schedules: a morning section (9 – 12) for maintenance and Scrum stand-ups, and an after-lunch section (2 – 6) for new development. We effectively time-boxed maintenance into the mornings.

Judging from the past month, the time-box worked rather well, increasing focus on a single WIP within either task bucket – new dev and maintenance.

However, it also left my appetite for completing a task unsatisfied. While neither task bucket was unattended for very long, we had to shift our focus between the two buckets every single day.

The only happy campers are the staffers who want more maintenance tasks finished off. What cannot be determined at the moment is whether the new product portfolio can be fully supported.

The point of running a Scrum team has been skewed by non-technical factors. I wonder whether our past year should now be considered a failure in resource management, or in project management at a bigger scope.

Scrum and Kanban

Weeks ago, our team had our first workshop in Agile-Scrum. Our CSM gave an overview of the approach and some of the basic reasons behind it.

Key take-aways:

1. Focused, vertical dev effort
2. Flexibility built into the method itself
3. Learn as we do, so that we keep doing it better

We also decided to bring TDD and BDD into the mix, starting with TDD, with the goal of using BDD on a future project.

Test-driven development will use nUnit with the TestDriven.net plugin for VS2010; behavior-driven development will use SpecFlow.
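
As a minimal sketch of the red-green rhythm we are adopting with nUnit (the PriceCalculator class and its names are made up for illustration, not taken from our codebase):

    using NUnit.Framework;

    // Hypothetical production class, written only after the test below
    // had failed first.
    public class PriceCalculator
    {
        public decimal ApplyDiscount(decimal price, decimal percent)
        {
            return price - (price * percent / 100m);
        }
    }

    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void ApplyDiscount_TenPercent_ReducesPriceByTenth()
        {
            var calculator = new PriceCalculator();

            var result = calculator.ApplyDiscount(100m, 10m);

            Assert.AreEqual(90m, result);
        }
    }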

Continuous Integration will soon be part of the process, most likely using TeamCity to manage our builds from the Subversion repository.

Our key concern is the divergence of effort, as we decided to keep devoting 25% of the team to maintenance of legacy apps, using a Kanban approach.

Hoping to increase team productivity, the 75% of the team not on the maintenance track will be focused on a single item of WIP (work in progress) at any given time. Each sprint is one week long.

Frequent interaction with, and feedback from, the product owner is key. Self-organization within the team, using pomodoro and time-boxing methods, will be useful.

On the table for client-side testing is employing qUnit to properly test JavaScript/jQuery interactions with the DOM and web services.

Planning to add additional memoir on Scrum/Kanban in the coming months.