
What do you mean, Governance?

Process Governance, Information Governance, Asset Governance, Data Governance, IT Governance – if you are working within a medium-to-large enterprise, there is a good chance you’ve heard of the term ‘governance’. Read on to learn more about the types that exist, and some conceptual details about each.

Common Governance Types (or “domains” that benefit from Governance)

Governance approaches and controls can be applied to almost any entity or subject for which the organization has a practical reason to put control mechanisms or processes in place – whether for regulatory compliance or other operational benefits.

Corporate Governance

Starting at the highest level for a private organization (for a public organization it would be, for example, the national government) is Corporate Governance – which can be defined as:

The operating model with rules and practices by which a board of directors ensures accountability, fairness, and transparency in a company’s relationship with all its stakeholders (financiers, customers, management, employees, government, and the community).

(many definitions exist; this one is for reference)

Corporate governance takes a comprehensive look at the structures and relationships that determine the direction of the corporate entity. From this perspective, the shareholders and management are the primary participants alongside the board of directors, which typically plays the central role at this level. Employees also get a seat in many corporate governance settings, as do representatives of customers, suppliers, and creditors – all working within the legal/regulatory and ethical/institutional constraints that may supersede whatever manner of governance the corporation adopts.

Corporate Governance Framework – Wikimedia Commons

Information Governance

Information governance offers a somewhat more 'conservative' perspective: the practice of putting measures in place to mitigate the risk of inaccurate or outright wrong information. There are multiple functions to manage in how technologies are utilized, and – typically in partnership with Information Security organizations – programs are put in place to define (see the sketch after this list):

  • What information is retained
  • Where it is stored
  • How long it is retained
  • Who has access (and what sort of access) to it
  • How that data is protected
  • How policies, standards, and regulations provide assurance
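
To make those questions concrete, here is a minimal sketch of how the answers might be captured as a machine-readable policy record. This is an illustration only – the `RetentionPolicy` type and all field names are hypothetical, not taken from any governance product or standard:

```typescript
// Hypothetical policy record answering the governance questions above.
// All names are illustrative; no specific product or standard is implied.
interface RetentionPolicy {
  informationType: string;      // what information is retained
  storageLocation: string;      // where it is stored
  retentionPeriodDays: number;  // how long it is retained
  accessRoles: { role: string; access: 'read' | 'write' | 'admin' }[]; // who has access, and what sort
  protection: ('encryption-at-rest' | 'encryption-in-transit' | 'masking')[]; // how the data is protected
  assuranceReferences: string[]; // which policies, standards, or regulations provide assurance
}

const customerEmailPolicy: RetentionPolicy = {
  informationType: 'customer-email',
  storageLocation: 'eu-archive-store',
  retentionPeriodDays: 365 * 7,
  accessRoles: [{ role: 'support-agent', access: 'read' }],
  protection: ['encryption-at-rest', 'encryption-in-transit'],
  assuranceReferences: ['GDPR Art. 5', 'internal-policy-IG-12'],
};
```

The value of a record like this is that each of the questions above has exactly one place to be answered – and audited.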

The challenge many organizations face is connecting these programs under one umbrella and correctly assigning ownership – sometimes to legal, sometimes to IT, and sometimes to compliance. Each organization is different, but in general, the following diagram describes the strategic vantage point the governance program can take:

Information Governance (simplified)

Operating information governance models may differ in structure or ordering, but the stakeholder perspectives hold true almost all the time. Partly due to the explosion of data volumes in recent decades and the accompanying increase in regulation and compliance issues, traditional 'records management' capabilities failed to keep pace, requiring a more descriptive maturity model. Organizations now need to deal with many different standards and laws that apply to information handling, such as:

  • The Computer Misuse Act of 1990
  • The Data Protection Act of 1998
  • The Freedom of Information Act of 2000
  • The Privacy and Electronic Communications Regulations of 2003
  • Payment Card Industry Data Security Standard (PCI DSS)
  • Health Insurance Portability and Accountability Act (HIPAA)
  • General Data Protection Regulation (GDPR)
  • California Consumer Privacy Act (CCPA)
  • and more…

When information resources effectively support the business, the organization can accomplish its strategic goals more efficiently – which is why information governance should not just be sponsored by executive leadership, but led at the enterprise level. There are other moving parts, however.

Data Governance

The difference between information and data may not be as clear-cut as that between software and hardware assets. After all, information governance outlines responsibility and decision-making accountability, while data governance focuses on managing unprocessed information (data) at the business-unit level – typically along four dimensions (sketched in the example after this list):

  • Availability (scope/delivery)
  • Usability (structure/semantic)
  • Integrity (referential/consistency)
  • Security (access/retention)
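
As a rough illustration of what those four dimensions can look like at the business-unit level, here is a hedged sketch of a per-dataset check. The dataset shape, field names, and thresholds are made up for the example:

```typescript
// Hypothetical data-governance checks for one dataset, one check per
// dimension listed above. Field names and thresholds are illustrative only.
interface DatasetRecord { id: string; customerId: string | null; updatedAt: Date; }

function checkGovernance(records: DatasetRecord[], expectedMinRows: number) {
  const now = Date.now();
  const sevenYearsMs = 7 * 365 * 24 * 3600 * 1000;
  return {
    // Availability (scope/delivery): was the expected volume of data delivered?
    availability: records.length >= expectedMinRows,
    // Usability (structure/semantic): do records conform to the agreed structure?
    usability: records.every(r => typeof r.id === 'string' && r.id.length > 0),
    // Integrity (referential/consistency): no malformed customer references.
    integrity: records.every(r => r.customerId === null || r.customerId.startsWith('CUST-')),
    // Security (access/retention): flag records held past the retention window.
    security: records.every(r => now - r.updatedAt.getTime() < sevenYearsMs),
  };
}
```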

With the need for business intelligence, data governance has become a priority in many organizations that must produce reports to meet regulatory needs. Irrespective of compliance needs at the data level (similar to those of Information Governance), organizations of medium-to-large size inevitably recognize that cross-functional tasks can no longer be implemented efficiently without it.

Notably, technical capabilities have become concentrated among fewer professionals in the industry, as technology advances have allowed various levels of Information Technology (IT) to converge. As in software solutions engineering practice, governance at the data level now requires tactical deployment – delivering quick 'wins' that avert organizational fatigue from a larger, more monolithic exercise.


Emergent Architecture

I had wondered what an 'emergent' architecture experience would be like.

Getting the full experience took two different products and about three years of real time.

The first year was about getting the proof-of-concept out the door, focused on the look and feel of the front-end component.

The POC was really a reincarnation of a failed project – the result of not adapting the original waterfall design well to a changed technology landscape (cost-effective big-data products, for example), which mostly ended up in numerous pivot attempts by the PM. The PMO was unable to recognize the gift of gab for what it was. So we moved to a new roadmap.

Having chosen a big-data database component, we moved to the flashiest API tech of the time, Node.js. Here we hit a roadblock: although the tech team wanted to validate the vision of continuous integration and delivery, we fell short of being able to execute a BDD/TDD approach.
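
For context, the kind of BDD-style test we fell short of executing looks roughly like the sketch below. It uses Node's built-in `node:test` runner purely as an assumption for illustration (the post names no framework), and the endpoint, handler, and dataset names are hypothetical:

```typescript
// A minimal BDD-style API test sketch. The runner choice (node:test) and all
// names here are assumptions for illustration, not the project's actual code.
import { describe, it } from 'node:test';
import assert from 'node:assert/strict';

// Hypothetical handler under test: returns an aggregate for one dataset.
async function getDatasetSummary(datasetId: string): Promise<{ id: string; rows: number }> {
  // The real service would query the big-data store; stubbed here.
  return { id: datasetId, rows: 42 };
}

describe('GET /datasets/:id/summary', () => {
  it('returns a summary for a known dataset', async () => {
    const summary = await getDatasetSummary('sales-2014');
    assert.equal(summary.id, 'sales-2014');
    assert.ok(summary.rows >= 0, 'row count should be non-negative');
  });
});
```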

Dropping the test-driven requirement from the project, we settled on an initial architecture that put data flow and presentation on separate threads of work. Of course we would have preferred separate teams, but the reality was that we only had enough time to bring in a couple of front-end experts. Features were built with an approach that resembled Scrum, modified to fit our company's state and CMMI maturity level (Scrum-but?).

I decided to accept parts of the application that seemed to violate my preference for an API-centric architecture, because I wanted the POC to come to fruition more than I wanted a structure I liked.

I struggled with it on a daily basis, but letting it go was better for our environment than enforcing the rules would have been.

The end result had the look, and the capability to handle most of the workload without too many issues – though there were still a myriad of them, functional and otherwise. We knew going in that the application wouldn't get much use this first year. We released it to two clients. It wasn't much of a system, just an app backed by a prototype API. Even the data flow was not fully implemented: it could import a large portion of the datasets, but it was somewhat hacky and not exactly reproducible at the frequency we wanted. That is how the first year came to pass.

The second year was half spent supporting the release, short-handed, as the team lost some people to the death march before the release. After the first few months, though, there was an opportunity to work on a new product using the same system. Many months afterwards were spent in PMO planning.

Luckily, the previous release did not get much use by the clients, so it took less effort to maintain.

The spec described an almost-a-CRM system that reused a reworked version of the previous product's front-end component inside a web app. It also called for near real-time customer feedback aggregation and charting – the 'big data' aspect of it.

In the third year, the team built the next version of the system, using the previous product as a basis to improve upon.

This is where it got interesting. With an actual due date already set, we had to work with a new primary data source, and the CRM aspect added a kind of workload we had not handled before. We were lucky to have a senior dev with experience building a public-facing CRM – someone who knew the work involved, and how grotesquely it can fail if not carefully carved out.

After a month, it became clear we wouldn't deliver the complete spec in time. In answer, management called for vertical implementation of the features. This meant we had to adapt our architecture as each new feature became clear enough to work on. Here is the real meat of this post.

An architecture that came with minimal pre-planning, answering minor challenges with grace, while we did our best to make sure no sweeping change would be needed. Going in without some core systems design would have opened us up to painful nights of haphazard changes and unnecessary risks.
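
To make "vertical implementation" concrete – a hedged sketch, since the post does not show the actual codebase – each feature slice owned its own route, data access, and presentation contract, so a slice could ship end-to-end without sweeping changes elsewhere. Every name below is hypothetical:

```typescript
// Hypothetical vertical slice: one feature owns its query and its view model.
// Names (FeedbackSummary, feedbackSlice, ...) are illustrative only.
interface FeedbackSummary {
  customerId: string;
  averageScore: number;
  sampleSize: number;
}

// Data access scoped to this slice - no shared repository layer to coordinate with.
async function queryFeedbackSummary(customerId: string): Promise<FeedbackSummary> {
  // The real implementation would hit the big-data store; stubbed here.
  return { customerId, averageScore: 4.2, sampleSize: 128 };
}

// Presentation contract for this slice: the shape the chart component expects.
function toChartSeries(summary: FeedbackSummary): { label: string; value: number }[] {
  return [
    { label: 'avg score', value: summary.averageScore },
    { label: 'samples', value: summary.sampleSize },
  ];
}

// The slice's single entry point, e.g. wired to GET /customers/:id/feedback.
export async function feedbackSlice(customerId: string) {
  const summary = await queryFeedbackSummary(customerId);
  return toChartSeries(summary);
}
```

The point of the shape is that adding the next feature means adding another slice, not reworking a shared layer – which is what let the architecture adapt as each feature became clear enough to work on.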

 

Re-defining Software Architecture

Management made the call: instead of continuing work on the massive, RIA plug-in-based application, the multi-year project would now aim for smaller, faster release cycles.

I rather welcomed the decision. The previous project (a.k.a. FRED) cost so much, in both infrastructure and ongoing development, that it actually cost the VP his job (I won't go into the details). The change fits Scrum well. Looking back, the FRED project felt like a BDUF (Big Design Up Front): the design/architecture came from a third-party partner, and while it had modules that were logically separated out, it was based entirely on proprietary technology instead of standards. That approach was probably the result of our needs never being properly explained. So after more than a year of implementation, we were left with a brain-child of a product that did not fit the needs of the business.

Sound like any other requirements-mismatch episode? I'm not denying it – in fact, I accept it. The only thing that matters now is taking the heap and turning it into gold.

The good news is that, in FRED, we have a benchmark to compare ourselves to. The better news? We just released the first Vertical Slice Component – and it took less than half the budget of FRED, in half the time, to deliver a working product that performs well.

Our goal now is turning the data we have into an asset and a facilitator of new insights into trends and patterns, not just matching a spec that may "turn out not to be modular after all".

Release 2 is in the works. Sprint 19 comes next.