Be on Time, on Budget - or Don't Bill the Client
Humor me for a minute. What would you need for your software development business if this was the rule: You don't bill your client if you deliver late or use more than the agreed-upon budget?
Of course, it's a crazy idea. This is not how things work in our industry. But, well, what if that was what clients demanded?
Before you rack your brain, one more piece of information: Of course the scope is fixed, too. Otherwise you'd argue, "I'll just deliver less on time, on budget." But that's surely not what clients want. They want it all: full scope, on time, and on budget. Time and budget are sized with regard to a certain scope.
You're right if you now see a completely fixed iron triangle appearing in front of you. It truly is iron because there is no leeway.
So let me ask again: What would the conditions be under which you'd be willing to enter such a contract? What would you need?
Think about it for a moment…
And then check against my thoughts:
Small Scope
My primary precondition would be a small scope, even a tiny scope. Requirements would need to be sliced very, very thinly. The deadline for delivery would not be more than 16 working hours in the future. Example: You start today at 9:00 and have to deliver tomorrow at 17:00.
Why 16 hours? That's the maximum time span over which I can assume some control of my time and attention. I could even lock myself into the basement and not respond to any notifications for that long. No distractions, no interruptions would be allowed for 16 hours, and that's all the easier to guarantee for 8 or 4 hours.
Clarity of Requirements
Even this small scope I would not tackle without crystal clear requirements. What does that mean? I need to know at least two things:
One function (or a list of functions) as entry points to the logic implementing the requirements. I want to know the exact function signature. Entry points get triggered by the user to elicit behavior from the software.
A list of acceptance criteria which can be automated. Requests and expected responses for the entry points need to be agreed upon with the customer/PO.
The acceptance criteria might include response times and other non-functional aspects. They, too, have to be specified exactly, as minimum criteria.
What if the requirements are just about the UI? Then a UI blueprint/sketch/wireframe needs to be provided. But I think this is a lesser problem.
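To make this concrete, here's a minimal sketch of what such an agreement could look like. The entry point `register_user`, its signature, and the expected responses are hypothetical, purely for illustration:

```python
# Hypothetical example of an agreed contract: a fixed entry-point signature
# plus automated acceptance criteria (agreed request/response pairs).
# Function name and values are illustrative assumptions, not a real contract.

def register_user(email: str, password: str) -> dict:
    """Agreed entry point: the exact signature is fixed before work starts."""
    # The implementation is what gets built within the promised time.
    if "@" not in email:
        return {"status": "rejected", "reason": "invalid email"}
    return {"status": "created", "email": email}

# Automated acceptance criteria, runnable at delivery time:
def test_valid_registration():
    assert register_user("jane@example.com", "secret")["status"] == "created"

def test_invalid_email_rejected():
    assert register_user("not-an-email", "secret")["status"] == "rejected"
```

The point is not the implementation but the contract around it: signature and acceptance tests are agreed up front, so "done" is unambiguous.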
Risk Assessment
Given a small scope and clear requirements, I should be able to tell the client my time and budget needs, right? But isn't that where things go off the rails? Our estimates are so far off, most of the time. That's why the above proposal is so crazy.
What do I need to fix that?
My answer is: I don't estimate.
Instead I forecast. Forecasts are based on objective historical data. With forecasts I get a probability distribution of possible futures. Here is an example of what I mean:
The yellow numbers represent the number of hours it could take to implement the requirements: 1, 2, … 8 hours.
The blue numbers show the likelihood of me managing to deliver in that time (blue bars), e.g. 35% (p=0.35) to finish in exactly 2 hours or 20% (p=0.2) to finish in exactly 4 hours. Obviously, no single duration (cycle time) is very likely. Would you bet 10€ or 100€ on an outcome with a probability of just 35%?
The red numbers show the percentile of a certain cycle time, i.e. its cumulative probability (red line). Example: Using 2 hours or less has a probability of 50% (p=0.5), 4 hours or less has a probability of 70% (p=0.7), 6 hours or less has a probability of 90% (p=0.9). Would you bet on 2 hours with a 50:50 chance of getting the client's money? Or do you prefer less risk, i.e. a higher probability? Maybe you're more the type who likes to bet on rolling a number in the range 1..5 with a die, which would mean a 5/6, or roughly 83%, probability; so you'd choose at least 6 hours with a 90% likelihood.
Do you see the difference from the usual estimations? Nobody knows the probability when someone says "I guess I'll need 6 hours for that." It's just a gut feeling. But where does that gut feeling come from? How reliable is it?
The probabilities in the figure above, though, are based on historical data. Every delivery so far was recorded with certain parameters, e.g. was it work on the backend or who worked on the issue or what was the estimated complexity etc.
To calculate a forecast I select relevant past issues and compile them into a distribution. The assumption is: if the next issue is similar to past issues, then it's likely to take as long as they did.
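As a sketch of this (with made-up historical data, and assuming a simple empirical distribution rather than any particular forecasting tool):

```python
# Sketch: turning the cycle times of similar past issues into a forecast.
# The history below is invented so that it roughly matches the figure's
# numbers (p=0.35 for 2h, 90th percentile at 6h, etc.).

from collections import Counter

def forecast(cycle_times_hours):
    """Map each observed duration to (probability, cumulative probability)."""
    counts = Counter(cycle_times_hours)
    total = len(cycle_times_hours)
    cumulative = 0.0
    table = {}
    for hours in sorted(counts):
        p = counts[hours] / total
        cumulative += p
        table[hours] = (round(p, 2), round(cumulative, 2))
    return table

def hours_for(table, confidence):
    """Smallest promise whose cumulative probability meets the confidence."""
    for hours, (_, cum) in table.items():
        if cum >= confidence:
            return hours
    return None

# Cycle times (hours) of relevant past issues, selected by similarity:
history = [1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 4, 4, 4, 4, 5, 6, 6, 6, 7, 8]

table = forecast(history)
print(hours_for(table, 0.9))  # 6: the shortest promise with >= 90% confidence
```

The selection of "relevant past issues" is the hard part; the arithmetic afterwards is trivial, as the sketch shows.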
A prerequisite for that is, of course, "an environment" similar to the one in the past, e.g. the tool set, programming language, and team are (pretty much) the same.
Of course a forecast is also just a prognosis, like an estimate. It's a statement about the future, which is fraught with uncertainty. But at least the uncertainty is tangible; it can be seen in the diagram. That way I can compare it to my risk sensitivity.
If, for example, an 83% probability could only be had by promising 20 hours or more, then I would not enter into a contract, i.e. not promise delivery on time/budget. But if 15 hours already reached that probability, I would.
Compensating Risk
Even if I picked a 100% value like 8 hours in the above diagram, that would not mean delivery in 8 hours is guaranteed. The new issue and/or the current environment might differ from the previous ones. So I'd better be cautious and add compensation for the remaining risk: it might happen that I don't deliver on time and get no money from the client.
Should I hence add padding to the time value? No, I'd rather add a risk margin to my fee.
Assuming I get 100€ per hour and promise to deliver in 8 hours my regular fee would be 800€ for the issue. And that might work often, but not always.
Time will tell how often I live up to my promise and rake in the money; that failure rate, too, will turn into a probability. For now let's assume I fail in 10% of the cases and the average regular fee is 1000€. The expected loss per issue is then 10% × 1000€ = 100€, so it seems prudent to add at least a 100€ margin; my fee would be 900€ instead of 800€ for the next issue.
The 100€ would go right away into a risk fund out of which I compensate myself should I not deliver as promised.
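The arithmetic can be sketched like this (function name and numbers are just the article's example, not a general pricing formula):

```python
# Sketch of the risk-margin arithmetic: regular fee plus the expected
# loss per issue, which goes into the risk fund. Numbers are the
# article's example (100€/h, 8h promise, 10% failure rate, 1000€ avg fee).

def fee_with_risk_margin(hourly_rate, promised_hours,
                         failure_rate, avg_regular_fee):
    """Regular fee plus a margin covering the expected loss per issue."""
    regular_fee = hourly_rate * promised_hours       # 100€ * 8h = 800€
    risk_margin = failure_rate * avg_regular_fee     # 0.1 * 1000€ = 100€
    return regular_fee + risk_margin

fee = fee_with_risk_margin(100, 8, 0.10, 1000)  # 800€ fee + 100€ for the fund
```

Over many issues the margin averages out the billed-nothing cases; for any single issue it is, of course, no guarantee.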
Bottom line:
There is no certainty in software development. Even with forecasts instead of estimations things might not work as predicted. So I better get a handle on the expected uncertainty.
Plus I should employ a risk mitigation device: an insurance. That's what the risk fund is.
But what if the customer wants a quicker delivery? I feel confident with 8 hours/900€, but he presses me to deliver in 6 hours. Should I agree? In the above chart, 6 hours still has a probability of 90%. Maybe that's an acceptable risk. It would, however, mean a fee of only 600€+100€=700€.
Would adding a risk premium make things better? I could offer 6 hours for 800€ and, should I succeed, put 200€ into the risk fund. For the client, on the other hand, the chance of me not succeeding is higher; that might motivate him to accept the risk premium: there is a chance for him of not paying anything at all.
Or should I ask for part of the fee to be paid in any case? Example: 700€ for 6 hours, with at least 200€ to be paid, even if I don't make it on time? (That would run against the basic setup, though, which states: no money if contract not met.)
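One rough way to compare these options is expected revenue per issue, using the success probabilities from the forecast. This is only a sketch; the mean ignores risk appetite on both sides, which is the whole point of the negotiation:

```python
# Comparing the pricing options above by expected revenue per issue.
# Success probabilities (100% for 8h, 90% for 6h) are taken from the
# forecast example; the option labels are the article's scenarios.

def expected_revenue(fee, p_success, guaranteed=0):
    """Average payout: full fee on success, only the guaranteed part otherwise."""
    return p_success * fee + (1 - p_success) * guaranteed

options = {
    "8h promise, 900 EUR":                 expected_revenue(900, 1.0),
    "6h promise, 700 EUR":                 expected_revenue(700, 0.9),
    "6h with 200 EUR risk premium":        expected_revenue(800, 0.9),
    "6h, 700 EUR, 200 EUR paid always":    expected_revenue(700, 0.9, 200),
}
for label, revenue in options.items():
    print(f"{label}: {revenue:.0f} EUR expected")
```

By this yardstick the unhurried 8-hour promise looks best for me; the guaranteed-minimum variant mainly shifts risk back to the client, which is exactly what he would have to be willing to pay for.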
In any case it's a matter of the historical data and my assessment of the issue in question and the overall environment and situation. Most important is to be aware of the uncertainty and possibility of variation.
Clean Code Development
Forecasting requires a certain stability of the overall environment. That includes the code I am writing.
However, the code necessarily changes with each implemented issue. The more it changes, the higher the variation distorting the probability distribution.
How can I mitigate this effect? The answer is easy: I better apply the principles and practices of Clean Code Development. The whole point of Clean Code is to allow for higher productivity over a longer period of time by making it easier to add changes. Clean Code is more malleable, easier to understand, easier to check for maturity and stability.
Clean Code includes basing the code on an architectural paradigm allowing for quick enhancements. The OCP comes to mind: I want to be able to easily add extensions to existing code for new features - without the need to change existing logic which could cause a regression.
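As an illustration of the OCP idea, here's a hypothetical registry-based dispatch (my example, not the article's actual architecture): new features register themselves, and the existing dispatch logic is never touched.

```python
# OCP sketch: features are added by registering new handlers.
# Existing logic (the registry and dispatch) never changes, so it
# cannot regress when a feature is added. Names are illustrative.

HANDLERS = {}

def handles(command):
    """Decorator registering a handler for a command name."""
    def register(fn):
        HANDLERS[command] = fn
        return fn
    return register

def dispatch(command, payload):
    """Stable entry point: routes a command to its registered handler."""
    return HANDLERS[command](payload)

@handles("greet")
def greet(payload):
    return f"Hello, {payload}!"

# A later feature: only new code is added, nothing existing is modified.
@handles("shout")
def shout(payload):
    return payload.upper()

print(dispatch("greet", "world"))  # Hello, world!
```

The extension mechanism is what keeps the environment stable enough for forecasting: each new issue adds code in the same predictable way.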
The IODA Architecture and Vertical Slices seem to be a good fit for that, maybe even Event Sourcing.
Anything supporting easy scalability of the code base is welcome.
Promise Like a Pro
A contract like the one in question is a "result promise". Result promises are always precarious. There is a comparatively high chance of not fulfilling them. But clients like them. That's why I am asking the question: What's needed to offer them?
Not all requirements can be implemented with a "result promise" like the above, though, not even after heroic slicing efforts. What to do about them?
In those cases I am offering a "behavior promise": I don't promise to deliver a result on time/budget, but focus on the issue for a certain amount of time while reporting continuously about my progress.
I avoid "manufacturing a product" and instead offer "a research project". I don't promise "I will deliver in 8 hours for 800€ - or you don't pay me." but rather "I will dedicate 2 hours per day for the next week to this issue. After each such session I will report my findings and my progress. Each session will be 200€."
This is plan B whenever I cannot come to an unambiguous contract with the client (see small scope and clear cut requirements above). Without such a contract I better not touch production code. Instead it's time for a prototype/proof-of-concept/spike, i.e. some non-production code (if code at all).
The client has to understand that changes to production code require a certain clarity and stability of the requirements. Without that, production code will deteriorate even more quickly due to the additional changes - plus my risk of not fulfilling the contract increases.
Differentiating between "result promise" and "behavior promise" is what I call Promise Like a Pro. Because a pro looks closer, has more than one tool in his pocket, and is ready to say no to unfeasible/unrealistic requests by the client.
Bonus: Provoke Variation
Forecasting works best with low variation or when there are patterns. But then there is Murphy, there are all kinds of surprises. Don't they contradict using forecasting?
How about this: Instead of praying that variation does not kill my patterns, I provoke variation and make it part of the patterns? How about making the surprising ordinary?
Example: People getting sick on a team or being replaced by a new developer is annoying. It's disrupting the flow, distorting the patterns. I could hope for this not to happen - or I could employ Dynamic Reteaming. Dynamic Reteaming changes teams consciously and "prematurely". It does not wait for something to happen to overthrow the team dynamics; it purposefully inserts new elements.
With Dynamic Reteaming a team change would not show up as a pattern disruption in the historical data; it would be part of the patterns.
Also, with an "ever changing team" the code base would be structured and documented in a different way; it would become naturally cleaner, because all the time someone new has to be onboarded, quickly brought up to speed, and made a true contributor instead of a burden.
What else could be done to provoke variation? Maybe source code could be changed in a kind of random way by an AI to introduce bugs? Bugs that of course should be caught right away by automated tests.
If a bug is not caught, it gets flagged so that a test case can be added. This would promote testability as the bedrock of a fast-moving team. (Yes, this also adds noise and extra work - but with a purpose. It's like Netflix's Chaos Monkey.)
Conclusion
Thanks for staying with me through this thought experiment. What do you think, is that enough to enter a "result promise" contract as suggested initially? Or would you need more to dare it?