Developing solutions using artificial intelligence techniques should be treated much more like an iterative experimental process than a traditional software development project.
In many cases the outcome simply can’t be confirmed until a model has been built, trained and evaluated.
Because of this, many organisations do not have an existing methodology suitable for this type of work. I have found great success adapting the Government Digital Service (GDS) agile methodology for it.
The GDS website has a large amount of information and guidance on this methodology, so I won’t repeat it here. However, when the work includes data science, the following additional considerations are useful:
Discovery – Develop a business case
- Understand the need or problem and determine if artificial intelligence is really needed!
- Do some initial data discovery to identify what data is available
- Complete any governance needed to use real data for model development and other activities in the alpha phase
- Develop at least a high-level business case (enough to gain funding for Alpha) and identify the sponsor
Alpha – Create and test a prototype
- Evaluate a range of models and build a prototype that demonstrates production-level performance is achievable
- Validate any assumptions made in discovery and evaluate the associated risks
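The Alpha evaluation above is essentially a filtering exercise: several candidate models are scored, and only those that can plausibly reach production-level performance go forward. A minimal sketch of that gate is below; the model names, scores and the 0.85 target are illustrative assumptions, not part of the GDS methodology.

```python
# Sketch of an Alpha-phase viability gate: keep only candidate models
# whose evaluated score meets a hypothetical production target.

PRODUCTION_TARGET = 0.85  # assumed minimum acceptable accuracy

# Evaluation results for a range of candidate models (scores are made up)
candidate_scores = {
    "logistic_regression": 0.81,
    "gradient_boosting": 0.88,
    "baseline_majority_class": 0.62,
}

def viable_models(scores, target):
    """Return the candidates whose evaluated score meets the target."""
    return {name: s for name, s in scores.items() if s >= target}

print(viable_models(candidate_scores, PRODUCTION_TARGET))
```

Keeping an obviously simple baseline (here, a majority-class predictor) in the comparison is useful: if no candidate clearly beats it, that is evidence the idea may be non-viable and should exit the funnel early.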
Beta – Move to production
- In many enterprise organisations the effort required during the beta stage is significant, especially if implementing machine learning models is new to the organisation. It will often encompass most of the usual software delivery methodology steps.
- All the required testing and validation should be completed before the model is deployed to production. Safeguards for bias, misuse, edge cases and ethical data usage need to be implemented, along with model management processes and the skills needed to own, monitor and retrain the model.
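One concrete form the bias testing above can take is comparing model performance across subgroups of the data and flagging large gaps before deployment. The sketch below is a minimal illustration; the group labels, sample records and the 0.1 gap threshold are assumptions, and a real check would use the organisation’s own fairness criteria.

```python
# Sketch of a per-group performance check for Beta-phase bias testing.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns the accuracy achieved for each group."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_bias(records, max_gap=0.1):
    """True if the gap between the best- and worst-served groups
    exceeds max_gap (the threshold is an assumed policy choice)."""
    accs = accuracy_by_group(records)
    return max(accs.values()) - min(accs.values()) > max_gap

# Illustrative records: (group, predicted label, actual label)
sample = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
print(accuracy_by_group(sample))  # group_a: 0.75, group_b: 0.5
print(flag_bias(sample))          # True: gap of 0.25 exceeds the threshold
```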
Live – Move to business ownership and ongoing model support
- To fully hand over the product to business as usual, it is important to ensure the business has implemented all the required controls and processes for model monitoring and retraining, and has agreed the support model for these.
- Ongoing retraining and model improvement also needs to be considered, especially if the operating environment is dynamic and likely to change.
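One widely used way to make the monitoring and retraining decision above concrete is to track drift in the model’s input data, for example with the Population Stability Index (PSI). The sketch below is an illustration of that idea; the bin counts are made up, and the 0.2 alert threshold is a common rule of thumb rather than a fixed standard.

```python
# Sketch: Population Stability Index (PSI) between the training-time
# distribution of a feature and what the model sees in live operation.
import math

def psi(expected_counts, observed_counts, eps=1e-6):
    """PSI from per-bin counts: sum of (o% - e%) * ln(o% / e%).
    eps guards against empty bins."""
    e_total = sum(expected_counts)
    o_total = sum(observed_counts)
    total = 0.0
    for e, o in zip(expected_counts, observed_counts):
        e_pct = max(e / e_total, eps)
        o_pct = max(o / o_total, eps)
        total += (o_pct - e_pct) * math.log(o_pct / e_pct)
    return total

# Illustrative bin counts for one feature at training time vs. in live data
training_bins = [100, 200, 300, 250, 150]
live_bins = [300, 250, 200, 150, 100]

# Rule of thumb: PSI above ~0.2 suggests significant drift; the exact
# retraining trigger is a policy choice agreed with the business owner.
if psi(training_bins, live_bins) > 0.2:
    print("Significant drift detected: review the model and consider retraining")
```

Checks like this give the business owner an objective trigger for the retraining process agreed at handover, rather than relying on performance complaints to surface drift.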
Implementing this approach has a number of benefits for managing activities and stakeholder expectations. Ideally, the funding approach and benefit expectations should treat the process as a funnel: many ideas enter discovery, and fewer move through each stage as they are validated or shown to be non-viable from a production-value perspective.
I’ve only scratched the surface in this article, but hopefully it’s a useful starting point. Get in touch and let me know if you have used something similar or would like support for your business.