Posted by Ara Kouchakdjian, VP Product Management at Stanley Healthcare

November 2019 –  The challenge: Figuring out a release date

For non-engineers, software processes are a black box.  Those of us facing the market want a simple answer to the question: “when can the client generate value from the new release?”  Explanations from many product and engineering leaders are factual (such as, ‘we’ve completed half of our story points’), but they are not usable outside engineering.  Other times the answer is ‘well, it depends,’ which does not answer the question either.

Some might say release dates don’t matter. In 2017, Amazon claimed it released software on average every 11.7 seconds.  At the same time, Facebook said it released software 50 times a day. Does that mean release dates don’t matter at all? I assert that they still do.

In today’s world, where customers have more distractions and shorter attention spans, the old adage ‘you only have one chance to make a good impression’ is truer than ever.  Today’s social media environment amplifies negative experiences, so the cost of a bad impression is high.  Additionally, in healthcare, where so much of the workflow is regimented, change must be managed for success. We need customers to be successful the first time they do something new. After all, it’s not about the speed of release; it’s about the speed at which customers achieve value!

Is there a rule to help predict release dates?  Yes, there is.

There are precious few rules in software engineering that have stood the test of time.  One is Brooks’ Law, which states that ‘adding engineers to a late project makes it later.’  Another is Moore’s Law, which (paraphrased) states that processing power doubles roughly every 18 months.

Another, which actually dates from 1984, can help address the schedule question: 20-30% of defects are not initially fixed successfully.

In 1984, E.N. Adams published “Optimizing Preventive Service of Software Products” in the IBM Journal of Research and Development (a variation of that article was published in IEEE Transactions on Software Engineering in 1985).  Among the many insights he derived from reviewing the failure history of nine IBM products was that 20-30% of defects are incorrectly fixed.  An IBM Fellow, Harlan D. Mills, asked his teams to re-confirm that finding and use it to help make software development more predictable.

How does one leverage this rule? Predicting release dates

Anyone can use the 20-30% unsuccessful fix rate to estimate when software will be ready for release.

As Engineering or QA are conducting system tests (ensuring new functionality works as intended and old functionality didn’t break), simply ask three questions:

  • How many critical defects have been found in this round of testing? In most organizations, critical is defined as Severity 1 (prevents primary business function for software) and Severity 2 (prevents key subsystems from successfully operating).
  • How long does it take to do a round of testing?
  • How long does one typically require/allow to fix the defects uncovered?

Let’s illustrate with a concrete case from my past.  An organization in the midst of a transition to agile had 20 Sev 1 and Sev 2 defects from a first round of full system testing.  The team could complete a round of testing in 2 days and would spend about 3 days fixing the defects from a round of testing (in other words, they’d do a full round of testing once a week).

Testing Round | Severity 1 and 2 Defects | Days to Fix | Days to Retest
1             | 20                       | 3           | 2
2             | 4-6 (expected)           | 3           | 2
3             | 1-2 (expected)           | 3           | 2

We estimated 3 weeks to have zero Severity 1 and 2 defects, and we were right.
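
The arithmetic behind this estimate is simple enough to script.  Below is a minimal Python sketch (the function name project_rounds and its parameters are illustrative, not from any particular tool) that applies the 20-30% unsuccessful-fix rate to an initial Sev 1 and Sev 2 count and the team’s fix/retest cadence:

    # Illustrative sketch: project test-and-fix rounds using the 20-30% rule.
    # Assumes every open Sev 1/2 defect is fixed each round, but 20-30% of
    # those fixes fail and show up again in the next round of testing.
    def project_rounds(initial_defects, days_to_fix, days_to_retest,
                       refix_low=0.20, refix_high=0.30):
        low, high = float(initial_defects), float(initial_defects)
        rows, total_days, round_no = [], 0, 1
        while high >= 1:                      # stop when even the worst case rounds to zero
            rows.append((round_no, low, high))
            total_days += days_to_fix + days_to_retest
            low *= refix_low                  # best case: 20% of fixes fail
            high *= refix_high                # worst case: 30% of fixes fail
            round_no += 1
        return rows, total_days

    rows, total_days = project_rounds(initial_defects=20, days_to_fix=3, days_to_retest=2)
    for round_no, low, high in rows:
        band = f"{low:.0f}" if round(low) == round(high) else f"{low:.0f}-{high:.0f}"
        print(f"Round {round_no}: expect {band} open Sev 1/2 defects")
    print(f"Estimated working days of test-and-fix: {total_days}")

Run against the example above (20 defects, 3 days to fix, 2 days to retest), this reproduces the 4-6 and 1-2 expectations for rounds two and three, and the roughly three-week (15 working day) estimate.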

Note that the rule also applies to organizations with a continuous integration / continuous test approach (where much of the testing runs automatically whenever engineers check in their code).  If there are more than 4 or so Sev 1 or Sev 2 defects after a night of testing, or if nightly testing is growing the backlog of high-severity defects, you’re going to fall behind!  The key point here is that ‘a defect found is not a defect fixed.’  It often takes a couple of rounds of fixing to get it right.
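
If the nightly results are logged, that fall-behind condition is easy to monitor.  Here is a minimal sketch, assuming you can pull the number of open Sev 1 and Sev 2 defects after each night of CI testing from your defect tracker (the threshold of about 4 follows the heuristic above):

    # Illustrative sketch: flag nights where the open Sev 1/2 count is above the
    # ~4-defect threshold, or where the high-severity backlog grew night over night.
    def nightly_release_risk(open_sev12_per_night, threshold=4):
        warnings = []
        for night, count in enumerate(open_sev12_per_night, start=1):
            over = count > threshold
            growing = night > 1 and count > open_sev12_per_night[night - 2]
            if over or growing:
                warnings.append((night, count, over, growing))
        return warnings

    # Example: the backlog shrinks for a few nights, then starts growing again.
    for night, count, over, growing in nightly_release_risk([6, 3, 2, 5, 7]):
        reasons = [label for label, hit in (("above threshold", over), ("backlog growing", growing)) if hit]
        print(f"Night {night}: {count} open Sev 1/2 defects ({', '.join(reasons)})")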

After applying this approach across more than a hundred releases in many organizations, I am convinced that the information from a round of system testing is a great predictor of when the software will be ready to release.

Why is this true?  It’s hard to translate business needs into software!

Adams’ rule from 1984 has stayed true for over 30 years. What explains it?  In a world where we swap technologies fairly often and software engineers work all over the globe, why is the predictor still accurate?

There are likely many factors, and this can make for a great end-of-day discussion at the office.  In deference to Occam’s Razor (the explanation with the fewest assumptions is usually the best), I suggest that translating business needs into software is hard and error prone.  In other words, the market need is not easily mapped to engineering instructions.

To illustrate: current patient BMI must be tracked and used for quality scoring, case management, and so on.  In concept, saving BMI is easy, but using it is much more complicated: quality scoring requires updates in real time (to address care gaps while a patient is with the clinician), while case management wants the updated BMI at the end of an inpatient encounter (to drive follow-up after discharge).  Imagine an engineer trying to code the latter case without fully understanding that they must find the latest BMI recorded between the most recent admission and discharge date/time stamps.
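
To make the ambiguity concrete, here is a minimal sketch of the case-management requirement, assuming illustrative data shapes (a list of timestamped BMI observations and a list of admission/discharge timestamps) rather than any real system’s schema:

    # Illustrative sketch: the latest BMI recorded between the most recent
    # admission and discharge timestamps (the subtle requirement in the text).
    from datetime import datetime

    def latest_inpatient_bmi(bmi_observations, encounters):
        """bmi_observations: list of (timestamp, bmi); encounters: list of (admit_ts, discharge_ts)."""
        completed = [e for e in encounters if e[1] is not None]   # ignore stays with no discharge yet
        if not completed:
            return None
        admit, discharge = max(completed, key=lambda e: e[0])     # most recent completed encounter
        in_window = [(ts, bmi) for ts, bmi in bmi_observations if admit <= ts <= discharge]
        if not in_window:
            return None                                           # no BMI recorded during the stay
        return max(in_window, key=lambda obs: obs[0])[1]          # latest observation wins

    observations = [(datetime(2019, 10, 1), 27.4), (datetime(2019, 11, 3), 26.8), (datetime(2019, 11, 6), 26.1)]
    stays = [(datetime(2019, 6, 2), datetime(2019, 6, 5)), (datetime(2019, 11, 2), datetime(2019, 11, 7))]
    print(latest_inpatient_bmi(observations, stays))  # 26.1, from the most recent stay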

As one can see, without a deep understanding of all use cases, it’s easy to build it – and fix it – incorrectly.

Conclusion

If you accept that 20-30% of defects are not fixed correctly the first time, you can predict delivery dates with a small amount of information!


About the Author

Ara Kouchakdjian, M.S., is the Vice President of Product Management at Stanley Healthcare.  Stanley Healthcare helps caregivers deliver better care.

He brings over 30 years of experience in delivering success to customers through innovative solutions.  His healthcare experience covers the breadth of patient engagement, clinical workflow, machine learning/artificial intelligence, digital therapeutics, and data strategy.

A results-oriented executive focused on generating customer value, Ara takes a collaborative approach to ensure product/market fit, transforming market pain points into customer successes. A believer in Maxwell’s ‘success is a series of small wins,’ Ara builds solutions whose hallmarks are simplicity, ease of adoption, and visible near-term success. He has grown and led high-performing teams at multiple organizations, consistently taking a customer-first approach.

Ara led Product Management during Silverlink Communications’ growth phase, delivering personalized voice communications at volume and later introducing the first HIPAA-compliant omni-channel communications platform.  Most recently, he established an AI center of excellence and drove a data strategy at athenahealth. In between, he increased revenue and customer satisfaction at GNS Healthcare, lifeIMAGE, and a number of other organizations.

He has led engineering, professional services, product management, regulatory affairs, and sales engineering organizations.  In addition to healthcare, Ara has worked in telecommunications, storage, defense, and aerospace.  Honors include multiple ‘product of the year’ awards, a ‘Best in KLAS’ designation, and recognition by NASA and other organizations.

Ara received an AB in Computer Science from Columbia College, Columbia University.  He also received an MS in Computer Science (Experimental Software Engineering) from the University of Maryland.

