From Requirements to Algorithms: The Evolving Role of the BA
--
I recall the moment vividly, because it was so profound to me: sitting in front of a Chief Medical Officer at a mid-size hospital network who stated, “We don’t need a Business Analyst; we need someone who speaks to the machine.” At the time I chuckled graciously and explained what I do. I now fully understand what he was conveying, and honestly, he was more right than he knew.
I have spent two decades at the intersection of technology and some of the most consequential industries on the planet: healthcare, financial technology, and SaaS platforms with millions of users. Along the way I earned a doctorate in project management, with a research track focused on AI risk governance, and I have spent countless hours transforming human complexity into structured, logical flows. Yet none of my academic training prepared me for the transition I have personally experienced in what it means to be a Business Analyst today.
The Business Analyst role has not merely evolved. It has been completely reinvented.
THE OLD CONTRACT
The core purpose of a Business Analyst (‘BA’) was to gather, document, and validate requirements before handing them off to technical teams. The BA was the bridge between business stakeholders and developers, producing use cases and process flow diagrams and facilitating stakeholder workshops to ensure that what the business asked for was at least close to what the technology department built. It was chaotic, political, and deeply human work, and I truly enjoyed it.
In healthcare, that meant sitting with clinicians for hours to understand how patient intake actually works, the disasters created by insurance pre-authorization, and how a triage nurse really uses the system, as opposed to how the vendor envisioned the nurse would use it. In fintech, it meant mapping transaction logic, identifying which regulations apply when a transaction is processed, and pinning down fraud-detection thresholds so that each could be written as an exact business rule. In SaaS, it meant understanding each user persona deeply enough to turn their frustrations into product features that would actually keep them from leaving the application.
Requirements were the deliverable. The BA was the translator.
That era did not disappear — but it fractured.
WHEN AI ENTERED THE ROOM
The first time I was pulled into an AI project was at a healthtech company building a predictive readmission model. The data science team was brilliant. They could build models I could barely conceptualize. What they could not do was answer the question: “What does the hospital actually need this to do, and what happens when it’s wrong?”
That question — deceptively simple — is where the modern BA lives.
Because when you move from traditional software to AI-driven systems, the nature of a “requirement” changes at its core. In conventional development, a requirement is a defined behavior: if the user enters X, the system does Y. It is deterministic. Testable. Traceable. In an AI system, particularly in machine learning or generative models, the system learns behavior from data. The output is probabilistic. The “logic” is encoded in weights and parameters that no stakeholder meeting will ever surface.
This is not just a technical distinction. It is a philosophical one, and it changes every artifact a BA produces.
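To make the contrast concrete, here is a minimal, purely illustrative sketch. The function names, the fee, and the threshold are invented for this example and do not come from any real system; the point is only that the first is a deterministic rule you can assert against, while the second is a learned score whose specification lives in thresholds and tolerated error rates rather than a single correct output.

```python
# Purely illustrative: the function names, fee, and threshold are invented.

def overdraft_fee(balance: float, withdrawal: float) -> float:
    """Traditional requirement: if the user does X, the system does Y.
    Deterministic, testable with a single assertion, traceable to a rule."""
    return 35.00 if withdrawal > balance else 0.00

def route_readmission_risk(risk_score: float, threshold: float = 0.70) -> str:
    """AI-era 'requirement': the score comes from a learned model, so the
    specification is really about the threshold, the tolerated error rates,
    and what happens near the boundary, not a single correct output."""
    return "flag_for_review" if risk_score >= threshold else "standard_discharge"

assert overdraft_fee(100.0, 150.0) == 35.00   # exact and repeatable
print(route_readmission_risk(0.73))           # meaningful only alongside error tolerances
```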
In that readmission project, I had to stop writing functional requirements in the traditional sense and start writing model objective statements, acceptable error tolerance thresholds, fairness criteria across demographic groups, explainability requirements for clinical staff, and escalation protocols for when the model’s confidence fell below a defined level. We were not specifying software behavior anymore. We were defining the boundaries of autonomous decision-making in a clinical environment. The stakes — patient safety, liability, regulatory exposure — were extraordinary.
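To give a sense of what those artifacts can look like in practice, here is a hedged sketch of an AI requirement expressed as a checkable specification. Every field name and number below is hypothetical and for illustration only, not drawn from the actual project; the value of the exercise is that error tolerances, fairness gaps, confidence floors, and escalation routes become explicit, reviewable values rather than prose buried in a document.

```python
from dataclasses import dataclass

@dataclass
class ModelRequirement:
    objective: str                      # model objective statement
    max_false_negative_rate: float      # acceptable error tolerance
    max_fairness_gap: float             # largest allowed metric gap across demographic groups
    min_confidence_for_autonomy: float  # below this, the model must defer to a human
    explainability: str                 # what clinical staff must be able to see

# Hypothetical values for a readmission model; every number here is illustrative.
readmission_spec = ModelRequirement(
    objective="Rank discharged patients by 30-day readmission risk",
    max_false_negative_rate=0.15,
    max_fairness_gap=0.05,
    min_confidence_for_autonomy=0.60,
    explainability="Top contributing factors displayed per patient",
)

def route_prediction(risk_score: float, confidence: float, spec: ModelRequirement) -> str:
    """Escalation protocol: the model only acts on its own above the confidence floor."""
    if confidence < spec.min_confidence_for_autonomy:
        return "escalate_to_clinician"
    return "flag_for_care_team" if risk_score >= 0.5 else "standard_pathway"

print(route_prediction(risk_score=0.81, confidence=0.42, spec=readmission_spec))  # escalates
```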
Experiences such as this have caused me to rethink my view of the profession.
THE NEW REQUIREMENT: UNDERSTANDING ALGORITHMS
Across healthcare, finance, and SaaS, I have found that the BAs who excel in AI-related work do not necessarily need to be able to code (though it helps). What is essential is the ability to understand how algorithms fail, and why those failures matter to the business.
When I worked as a BA in finance, I built and supported credit scoring models, monitored AML (anti-money-laundering) transactions, and helped create robo-advisory platforms. That work taught me that the most dangerous point of a project is not development; it is the requirements phase, when the people involved assume that an AI will simply “figure it out.” I spent considerably more time challenging the assumptions people made about their requirements than documenting them. I would always start with the same question: what data are we feeding the algorithms? Does that data reflect our current clients, or does it mostly describe past customers the model has since drifted away from? Which regulatory standards define an acceptable level of model drift? And how will we justify to a client that their loan application was rejected by the algorithms we employ?
These are not data science questions. They are business analysis questions wearing a new coat.
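That said, it helps to see how one of those questions becomes measurable. Below is a minimal sketch, with synthetic data and conventional bucket counts and thresholds rather than figures from any real engagement, of a Population Stability Index (PSI) check: one common way to ask whether the population a model scores today still resembles the population it was trained on.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score distributions; larger means more drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # catch scores outside the training range
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)               # avoid division by zero / log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Synthetic example: the applicant population has shifted since training.
rng = np.random.default_rng(7)
training_scores = rng.beta(2.0, 5.0, 50_000)         # historical applicants
current_scores = rng.beta(2.6, 4.0, 50_000)          # today's applicants
value = psi(training_scores, current_scores)
print(f"PSI = {value:.3f} -> {'investigate drift' if value > 0.25 else 'stable'}")
```

The 0.25 alert level used above is a widely cited rule of thumb, not a regulatory requirement; the regulatory question in the paragraph above is precisely about which threshold an organization can defend.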
In SaaS, the dynamic is slightly different but equally complex. Product teams move fast. AI features get shipped inside sprint cycles, often without rigorous impact assessment. I have seen recommendation engines quietly amplify user behavior patterns in ways that increased short-term engagement but created long-term churn. The algorithm was working exactly as optimized. The requirement was wrong. And no one caught it because the BA had been reduced to a ticket-writing function rather than a strategic analytical voice.
That reduction is a failure of organizational design, and it is one I push back against constantly.
THE RISK DIMENSION NO ONE TALKS ABOUT ENOUGH
My doctoral research studies both AI governance and project risk, and my observations point to the same conclusion: much of the failure rate associated with AI projects in regulated industries is not technical failure. It is requirements failure.
In short, the algorithms work as designed and trained. They underperform because the quality of their training data was never adequately specified or validated, or because the context in which they would be used was never fully understood. In healthcare, an algorithm that predicts sepsis from data gathered in a controlled environment such as an intensive care unit will yield very different results when applied to patients in a rural hospital with different documentation practices. In financial products, a fraud-detection algorithm that performs well in a laboratory setting can generate so many false positives in production that customers lose confidence in the product within the first 90 days.
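A back-of-the-envelope calculation makes the fraud example concrete. The numbers below are entirely made up, but they show how a model with a seemingly small false positive rate can still bury a low-prevalence signal under false alarms once it meets production volumes.

```python
# Illustrative numbers only; none of these figures come from a real deployment.
fraud_rate = 0.001           # 0.1% of transactions are actually fraudulent
recall = 0.90                # the model catches 90% of true fraud
false_positive_rate = 0.01   # the model wrongly flags 1% of legitimate transactions
transactions = 1_000_000     # monthly volume

true_alerts = fraud_rate * transactions * recall
false_alerts = (1 - fraud_rate) * transactions * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"Fraud caught:    {true_alerts:,.0f}")
print(f"False alarms:    {false_alerts:,.0f}")
print(f"Alert precision: {precision:.1%}")  # roughly 8%: most flagged customers did nothing wrong
```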
Business Analysts who can understand and effectively mitigate not only project risk but also algorithmic, ethical, and regulatory risk will be the people who save organizations from poor decisions born of the gap between the expectations management sets and the results the algorithms actually deliver.
FUTURE DIRECTION
The BA role is not being automated away. It is being elevated — for those willing to evolve. The professionals I see struggling are the ones who define their value by deliverable type: “I write BRDs, I draw swim lanes, I run JAD sessions.” Those artifacts still have value in the right context. But if your identity is tied to the artifact rather than the analytical function behind it, AI will absolutely marginalize you.
The professionals I see thriving are those who have leaned into ambiguity as a discipline. Who treat a poorly defined AI objective the same way a good clinician treats a vague symptom — not as an inconvenience, but as the actual problem to solve. Who understand that in a world of probabilistic outputs, precision in human intent matters more than ever, not less.
I did not get a doctorate to become an expert in requirements documents. I did it to become an expert in how human organizations make decisions under uncertainty — and how to structure that uncertainty into something a system, human or artificial, can act on responsibly.
That work has never been more needed. And I have never been more certain that the Business Analyst, reimagined and retooled, belongs at the center of it.