All posts by offthemesh

Emergent Architecture

I had wondered what an ‘emergent’ architecture experience would be like.

Getting the full experience took two different products and about three years of real time.

The first year was about getting the proof-of-concept out the door, focused on the look and feel of the front-end component.

The POC was really a reincarnation of a failed project. The original waterfall design was never adapted to the changed technology landscape (cost-effective big-data products, for example), which mostly ended up in numerous pivot attempts by the PM; the PMO was unable to see past the gift of gab. So we moved on to a new roadmap.

Having chosen a big-data database component, we moved on to the flashiest API tech: node.js. Here we hit a roadblock. Although the tech team wanted to validate the vision of continuous integration and delivery, we fell short of being able to execute a BDD/TDD approach.

Dropping the test-driven requirement from the project, we settled on an initial architecture that put data flow and presentation on separate threads of work. Of course we would’ve preferred separate teams, but the reality was that we only had enough time to bring in a couple of front-end experts. The features were built with an approach that resembled Scrum, modified to fit our company’s state and CMMI maturity level. (Scrum-but?)

I decided to accept parts of the application that seemed to violate my desire for an API-centric architecture, because I wanted the POC to come to fruition more than I wanted a structure I liked.

I struggled with it on a daily basis, but letting it go was better for our environment than enforcing some rules.

The end result had the look, and the capability to handle most of the workload, though there was a myriad of issues, both functional and non-functional. We knew going in that the application wouldn’t get much use in this first year. We released it to two clients. It wasn’t much of a system, just an app backed by a prototype API. Even the data flow was not fully implemented: it could import a large portion of the datasets, but it was somewhat hacky and not reproducible at the frequency we wanted. This is how the first year came to pass.

The second year was half spent supporting the release, short-handed, as the team had lost some people after the death march leading up to the release. After a few initial months, though, there was an opportunity to work on a new product using the same system. Many months afterwards were spent in PMO planning.

Luckily, the previous release did not get much use by the clients, so it took less effort to maintain.

The spec came as an almost-a-CRM system that uses a rework of the previous product’s front-end component inside a web app. The spec also called for near real-time customer feedback aggregation and charting: the ‘big data’ aspect of it.
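To make the aggregation-and-charting aspect concrete, here is a minimal sketch of the kind of roll-up the spec called for. Everything here (the function, the event shape, the field names) is hypothetical illustration, not our actual implementation:

```javascript
// Minimal sketch: roll up a stream of customer-feedback events into
// per-product buckets suitable for charting. All names are hypothetical.
function aggregateFeedback(events) {
  const buckets = {};
  for (const e of events) {
    const b = buckets[e.product] || (buckets[e.product] = { count: 0, total: 0 });
    b.count += 1;
    b.total += e.rating;
  }
  // Shape the buckets into chart-friendly rows with an average rating.
  return Object.entries(buckets).map(([product, b]) => ({
    product,
    count: b.count,
    avgRating: b.total / b.count,
  }));
}
```

In a near real-time setting the same reduction would run incrementally over a feed rather than over a full array, but the bucket-then-shape structure stays the same.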

In the third year the team built the next version of the system, using the previous product as a basis to improve upon.

This is where it got interesting. With an actual due date for delivery already set, we had to work with a new data source as the primary data, and the CRM aspect added a kind of workload we had not worked on before. We were lucky to have a senior dev who had experience with a public-facing CRM, who knew the work involved and how it could fail grotesquely if not carefully carved out.

After a month, it became clear we wouldn’t get the complete spec done in time. In answer, management called for vertical implementation of the features. This meant we had to adapt our architecture as new features became clear enough to work on. So here is where the real meat of this post is.

An architecture that came with minimal pre-planning, answering minor challenges with grace, while doing our best to ensure no sweeping change would be needed. Going in without some core systems design would have opened us up to painful nights of haphazard changes and unnecessary risk.

 


Mixing release support and new development

The first thing that comes to mind is “don’t do it!” And frankly, it’s far from best practice.

But my recent project experience is exactly the opposite. With known-but-not-confirmed feature requests for the next version from the client, our dev team was split into two, because:

  1. the initial release had just occurred, and we had accumulated a variety of technical debt.
  2. we needed to estimate roughly how long it would take for us to vertically implement the next release.

Not exactly in the below order, these are the steps we usually go through:

  • release operation check
  • pop support requests to fix
  • resolve issues
  • QA the fix
  • release or reopen
  • discuss new features
  • estimate roughly, in vertical epics
  • find ways to componentise at the epic level (emergent architecture)
  • link data flows between components
  • define interfaces
  • prototype interface integrations
  • work on positive workflows using automated/manual tests
  • slice epics into stories
  • resolve any interface or data-flow feature discrepancies
  • find ways to not reinvent complex wheels
  • implement as quickly as possible
  • plan load testing
  • acceptance testing
  • risk identification & planning
  • release and support
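The “define interfaces” and “prototype interface integrations” steps above can be as lightweight as a throwaway contract check between two components before wiring them together for real. A hedged sketch, with all names hypothetical:

```javascript
// Hypothetical contract between the data-flow component and the
// presentation component: the data side promises rows of this shape.
const datasetContract = {
  id: 'string',
  label: 'string',
  values: 'object', // an array of numbers; typeof [] === 'object'
};

// Throwaway prototype check: does a producer's sample output satisfy
// the contract before the two components are integrated for real?
function satisfiesContract(row, contract) {
  return Object.keys(contract).every((key) => typeof row[key] === contract[key]);
}
```

A check like this costs a few minutes and surfaces interface discrepancies at the prototyping step, instead of during story implementation.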

Re-defining Software Architecture

Management made the call. Instead of continuing work on the massive, RIA plug-in-based application, the multi-year project will now aim to deliver in smaller, faster release cycles.

I rather welcome the decision. The previous project (a.k.a. FRED) cost so much, in both infrastructure and ongoing development, that it actually cost the VP his job in the company. (I won’t go into the details of that.) The change actually fits Scrum well. Looking back, the FRED project felt like BDUF. The design/architecture came from a third-party partner; it had modules that were logically separated out, but it was based entirely on a proprietary technology instead of standards. Such an approach was probably the result of the lack of a proper explanation of needs. So after more than a year of implementation, we were left with a brain-child product that did not fit the needs of the business.

Sound like any other requirements-mismatch episode? I’m not denying it; in fact, I accept that. The only thing that matters now is taking the heap and turning it into gold.

The good news is that, in FRED, we have a benchmark to compare ourselves to. The better news? We just released the first Vertical Slice Component, and it took less than half the budget of FRED, and half the time, to deliver a working product that performs well.

Turning the data we have into an asset and a facilitator of new insights into trends and patterns is our goal now, not just matching a spec that may “turn out not to be modular after all”.

Release 2 is in the works. Sprint 19 comes next.

Make it work, at what cost?

Our team has just ‘finished’ the initial release phase of a multi-year project. We acknowledged each other’s effort in a round of handshakes and smiles, knowing we had put in an unusual amount of effort over the past week.

Still, the project did not deliver on all of its goals. Regardless of the circumstances, it left a bittersweet feeling from not seeing the full implementation in action. No one wanted an incomplete deliverable, yet it happened.

It’s time to chew on what went well and what went wrong, to learn from the experience and to evaluate the points from process and cost perspectives.

What went well:

  1. Cohesive teamwork
  2. Focus on the deliverables
  3. Selecting appropriate technologies to use
  4. Willingness to go the extra mile
  5. Getting the needed support from management

Yet there is also what went wrong:

  1. ‘Pockets’ of project information, where only a few people know the critical data/logic
  2. Insufficient time to completion
  3. Getting critical data at the last minute
  4. Changes in solution approach, late in the project

Commitment from the development team members definitely shone through the entire process; no one doubts that.

Process and Cost Analysis

One approach is to see how much technical and other debts we have on the project, then estimate the amount of effort required to reduce the debts.

Technical Debt

I am finding that there is quite a bit of technical debt on multiple fronts:

  • Bugs in the UI event-handler process(es) that are contributing to data interchanges at unintended points in the process. (May have to do with the various IoC registrations done using Unity Application Block.)
  • Instead of a minimal message package, the service-consuming application invokes the service with an entire object, serialized as XML (already too verbose).
  • Data persisted in MongoDB also houses unnecessary fields.
  • Data retrieval takes too long.
  • Logic applied in retrieving data is not finalized.
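The whole-object-over-the-wire item is the easiest of these debts to illustrate. A hedged sketch of the fix, with the domain object and its fields entirely hypothetical: trim the payload down to only what the service actually reads.

```javascript
// Hypothetical full domain object as persisted; the service only
// needs three of these fields.
const customer = {
  id: 1042,
  name: 'Acme Corp',
  address: { street: '1 Main St', city: 'Springfield' },
  orderHistory: [], // hundreds of entries in reality
  notes: 'long free-text audit trail...',
};

// Paying down the debt: build a minimal message instead of serializing
// the entire object for every service call.
function toServiceMessage(c) {
  return { id: c.id, name: c.name, city: c.address.city };
}
```

The same idea applies to the MongoDB item: persist the minimal message shape rather than the whole object, and the unnecessary fields disappear from storage too.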

Documentation Debt

  • Reasons behind some of the mid-project changes in solution approach
  • Service APIs are not ReSTful yet, and API documentation is lacking
  • UI/Event + Unity process documentation is lacking

All this debt, on both the technical and documentation fronts, makes me wonder how the project was deemed successful by project management.

Among the documentation debts, not knowing the reasons behind some of the solution-approach changes is an especially cumbersome item. Missing documentation is always a burden, but missing the real ‘reasons’ behind changes is usually more problematic than, say, missing API documentation. Developers generally do not have to concern themselves with the business reasons, but project management definitely does. And if the reasons were technical in nature, or involved technical input, then the developers should also be aware of them, to a reasonable extent.

Maintenance within a Single Scrum Team

During the past month, a relatively new form of resource management has taken hold within our team.

Two separate factors contributed to this shift of developer resources:

1. Budget Constraints

2. Compliance Requirements

The first, budget, came from management’s decision not to extend most of the consultancy contracts for a specific product, all of which were already overdrawn. With a new product offering in the company portfolio for next year, the in-house developer resources have been tapped to finish Release 1.

The second, compliance, was the legal overhead of meeting certain Sarbanes-Oxley requirements, so that we can prove our security practices are effectively in place.

For now, it should suffice to say that BOTH of these factors are what I’ve come to experience at most companies where software is an integral part of the portfolio, or suite of products, and so has a direct effect on the bottom line.

Given these concerns, our team decided to deal with developer scarcity by splitting the single team into two schedules: a morning section (9–12) of maintenance and Scrum stand-ups, and an after-lunch section (2–6) of new development. We effectively time-boxed ‘maintenance’ into the mornings.

Judging from the past month, the time-box worked rather well, increasing focus on a single WIP item in each of the task buckets: new dev and maintenance.

However, it also left my appetite for completing a task unsatisfied. While neither task bucket went unattended for very long, it required us to shift focus between the two buckets every single day.

The only happy campers are the staffers who want more maintenance tasks finished off. What cannot be determined yet is whether the new offering portfolio can be fully supported.

The point of running a Scrum team has been skewed by non-technical factors. I wonder whether our past year should now be considered a failure of resource management, or of bigger-scoped project management.

Scrum and Kanban

Weeks ago, our team had its first workshop in Agile/Scrum. Our CSM gave an overview of the approach and some of the basic reasoning behind it.

Key take-aways:

1. Focused/vertical dev effort.
2. Flexibility as a built-in application of the method.
3. Learn as we do, leading to doing it better.

We also decided to bring TDD and BDD into the mix, starting with TDD, with the goal of using BDD on a future project.

Test-driven development will use nUnit with the TestDriven.net plugin for VS2010; behavior-driven development will use SpecFlow.

Continuous Integration will soon be in place as well, most likely using TeamCity to manage our builds from the Subversion repository.

Our key concern is the divergence of effort, as we decided to keep devoting 25% of the team to maintenance of legacy apps, using a Kanban approach.

Hoping to increase team productivity, the other 75% of the team (not on the maintenance track) will be focused on a single item of WIP (work in progress) at any given time. Each sprint is one week long.

Frequent interaction with, and feedback from, the product owner is key. Self-organization within the team, using pomodoro and time-boxing methods, will be useful.

On the table for client-side testing is qUnit, to properly test JavaScript/jQuery interactions with the DOM and web services.
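One trick that makes that kind of testing easier is keeping the markup-building logic in a pure function, so assertions don’t need a live DOM or a stubbed web service. A sketch with an entirely hypothetical helper, shown with a plain assertion in a comment where a qUnit test would go:

```javascript
// Hypothetical helper: build the markup a jQuery handler would inject
// after a web-service call. Keeping it pure makes it trivial to assert
// on with qUnit (or any assertion library) without a live DOM.
function renderStatus(response) {
  const cls = response.ok ? 'status-ok' : 'status-error';
  return '<span class="' + cls + '">' + response.message + '</span>';
}

// In a qUnit module this would read roughly:
//   QUnit.test('renders ok status', function (assert) {
//     assert.equal(renderStatus({ ok: true, message: 'Saved' }),
//                  '<span class="status-ok">Saved</span>');
//   });
```

The jQuery call site then shrinks to `$('#status').html(renderStatus(response))`, leaving only a thin DOM layer untested.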

I’m planning to add further memoirs on Scrum/Kanban in the coming months.