In this blog from Lead Data Scientist Emily Tolentino, you’ll learn about a typical LexisNexis® Risk Solutions approach to responsible model development, and why getting it right matters.
My name is Emily Tolentino, and I am a Lead Data Scientist at LexisNexis® Risk Solutions focused on Governance and Responsible AI. Our program centers on responsible model development and maintenance, and it is one of many components of product development.
I work with other passionate colleagues on a framework that helps our data science and product teams build and maintain world-class solutions, encompassing both efficacy and the responsible use of AI.
Every day we come to work energized, because we know the positive impact our solutions can have on customer outcomes and how they help people live life secure and less interrupted.
Not all AI tools are created equal. Many could be poor-performing products in a shiny ‘AI’ wrapper. My colleagues and I make sure that’s not the case with ours. We are passionate about creating products that work, are fair, and are fully documented. We are dedicated to building products that make us proud. It’s not about ‘checking the regulatory box’; this is our mission.
We take new production models through a series of reviews during their development phase, and we interrogate them extensively. This typically includes multiple rounds of review by domain experts in both data science and financial-services best practices. These technical reviews evaluate the solution’s rigor and reasonableness, assessing the use of researched best practices and reliable tools, and hearing the full model story. No one person knows everything, so we bring people together to question and challenge one another, achieving a result better than any individual could accomplish alone.
A certification flow at LexisNexis® Risk Solutions involves many different teams and touches every stage of development.
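To make the idea of a gated flow concrete, here is a minimal, purely illustrative sketch in Python. The stage names and the data structures are hypothetical, chosen only for the example; they are not the actual LexisNexis® Risk Solutions flow:

```python
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    """One review gate: a named check plus the evidence behind it."""
    name: str
    passed: bool = False
    evidence: list[str] = field(default_factory=list)

@dataclass
class CertificationFlow:
    """A model advances only after every checkpoint has passed with evidence."""
    model_name: str
    checkpoints: list[Checkpoint] = field(default_factory=list)

    def ready_for_deployment(self) -> bool:
        # A pass without recorded evidence does not count.
        return all(cp.passed and cp.evidence for cp in self.checkpoints)

# Hypothetical stage names, for illustration only.
flow = CertificationFlow(
    model_name="example_risk_model",
    checkpoints=[
        Checkpoint("data_review"),
        Checkpoint("methodology_review"),
        Checkpoint("fairness_review"),
        Checkpoint("documentation_review"),
    ],
)
flow.checkpoints[0].passed = True
flow.checkpoints[0].evidence.append("data_quality_report.pdf")
print(flow.ready_for_deployment())  # False until every gate passes with evidence
```

The design choice the sketch reflects is the one described above: a model cannot advance on assertions alone; every gate requires both a pass and recorded evidence.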
We set high expectations for what’s required within each review, and a solution must clear a series of checkpoints, with evidence, before it can advance. It’s about breaking down silos and applying constructive friction throughout development.
Feedback isn’t the end of the conversation. It initiates iteration, prompting teams to revisit and refine their work.
Our products drive decisions that affect real people, so it’s vital that they stay resilient and effective. Once a model is deployed, we monitor and maintain it through intentional revalidation and remediation. ‘Deploy and forget’ is not our style.
Recurring model revalidations are an enormous undertaking for every team involved, but a necessary one: they ensure that models perform, and continue to perform, as expected.
Even when the process feels long, we maintain a level of accountability that prevents deteriorated models from outstaying their welcome. If revalidation results demonstrate that a model is serving its purpose, great. If the relevant experts and stakeholders, working together, determine that the results indicate a model is not meeting expectations, then it needs remediating. The framework ensures continued accountability until remediation is complete.
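As a generic illustration of the kind of automated check that can support recurring revalidation, consider the Population Stability Index (PSI), a drift metric widely used in financial-services model monitoring. The sketch below, including its rule-of-thumb thresholds, is an assumed example of the technique, not a description of LexisNexis® Risk Solutions tooling:

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a model's development-time score distribution and a recent one.

    Bins are fixed from the expected (development) distribution; a small
    epsilon guards against empty bins.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    eps = 1e-6
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual) + eps
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Illustrative revalidation check on synthetic scores, with common
# rule-of-thumb PSI thresholds (0.10 and 0.25).
rng = np.random.default_rng(0)
dev_scores = rng.normal(0.50, 0.10, 10_000)     # scores at development time
recent_scores = rng.normal(0.55, 0.12, 10_000)  # scores in recent production
psi = population_stability_index(dev_scores, recent_scores)
if psi > 0.25:
    print(f"PSI={psi:.3f}: major shift, escalate for remediation review")
elif psi > 0.10:
    print(f"PSI={psi:.3f}: moderate shift, investigate")
else:
    print(f"PSI={psi:.3f}: stable")
```

A rising PSI does not by itself condemn a model; as described above, it is a signal that routes the model to the experts and stakeholders who decide whether remediation is needed.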
Adding structure that guards against negligence, and simply making our products better, is the right thing to do. It’s exciting to see the tremendous effort and care every team puts into this process. It is a remarkable commitment.
We are proud of the products we support and continue to wake up every day eager to build trust in AI solutions.
Emily Tolentino, M.S., is Lead Data Scientist at LexisNexis® Risk Solutions. She manages the Data Science Governance and Responsible AI program.