This rate environment is different and every decision matters. Are your models up to the task?
People generally don’t get too excited about model validation – it’s akin to taking one’s car to the mechanic. But would it be a good idea never to bring your car in for a tune-up? Of course not. In fact, you would most likely bring your car to the best mechanic you can find: someone you can trust to find anything that’s wrong, make it run better, and give you the confidence it will get you where you need to go. The same should hold true for your key risk models and getting the right validation performed.
Regulatory Perspective
The regulatory approach to model validation has continued to grow and evolve over the past decade-plus. Model risk management guidance has expanded in practice since it was first published in 2011, with horizontal reviews by examiners and validators resulting in a trickle-down of best practices across the industry (the FDIC adopted the model risk management guidance in 2017, and the OCC further expanded its oversight with the publication of the Comptroller’s Handbook on Model Risk Management in 2021). Broader risk management initiatives have added new emphasis as well, including on enterprise risk management and operational risk management. Meanwhile, data, technology, model usage, and model complexity have all continued to grow in sophistication.
Therefore, while previous regulatory attention had primarily been focused on large and mid-sized institutions, expectations for community banks and credit unions are now ramping up, with several key areas of focus:
Data and assumption management, including data sufficiency and reliability, assumption support, and sensitivity testing
Governance, with attention paid to documentation, change control, and risk limits
Ongoing performance monitoring
Alignment with an institution’s model risk management framework
Identification of sub-model use
Validation
With that focus in mind, regulatory scrutiny of model risk management has become more prevalent. To enhance your model risk management framework, address the areas regulators have noted as concerns in recent exams:
Model inventory: inventories are found to be incomplete with no updating process in place
Effective challenge: validations do not demonstrate sufficient rigor for risk models
Model validations: not all models were validated, and in some cases the validators lacked the requisite independence to adequately ensure a model was performing as intended
Model documentation: documentation was incomplete or out of date, and/or lacked detail on assumption support, model development, and enhancements, as well as overlays/overrides
Validation Scope and Rigor
One fact has become increasingly clear in the current environment: a traditional “audit-like” validation is not sufficient to meet regulatory expectations. Whereas an audit may simply pose a yes-or-no question as to whether the institution is meeting a requirement, a validation digs deeper to determine whether policies, processes, and procedures are sufficient given the size and complexity of the institution. A solid validation will take into account:
Governance and process framework
  Policies
  Model purpose and use, architecture, and limitations
  Core methodologies
  Documentation
Model input – data and assumptions
  Data sources and sufficiency
  Data management, quality, and reliability
  Assumptions: accuracy, reasonableness, support, and sensitivity testing
Model processing – underlying theory, analytics, and mathematics
  Modeling system
  Model mathematics and formulae
  Selection of model drivers
Model reporting
  Reporting: content, frequency, distribution, and effectiveness
In reviewing these aspects, a simple “check-the-box” validation falls short. The goal of a validation should not be simply to find errors, but to pair rigorous review with effective challenge to uncover model deficiencies and strengthen the process.
Effective Challenge
Regulatory guidance on model risk management defines effective challenge as “critical analysis by objective, informed parties who can identify model limitations and assumptions and produce appropriate changes. Effective challenge depends on a combination of incentives, competence, and influence.”
Incentives are better when there is greater separation of challenge from the model development process, when compensation practices are well-designed, and when effective challenge is part of the corporate culture.
Competence means the right technical knowledge and skills are in place to provide appropriate analysis and critique.
Influence involves explicit authority, stature, and high-level commitment to ensure actions are taken to address any modeling issues.
Throughout the industry, however, effective challenge remains inconsistent and limited. The best validations ask tough questions, taking care to thoroughly document both the process and the findings.
Data Management
Effective modeling starts with quality data, but data shortcomings can undermine model reliability and user confidence. Data management best practices shared by high performers include the following (a brief data-check sketch follows the list):
Taking the time to understand and document all aspects of the data, including source/type/location, scrubbing, reconciliation, management, and security
Documenting all data exploration and testing processes
Differentiating inputs subject to change control from those handled through normal operations
Testing the impact of data aggregation on model results
Leveraging centralized data initiatives as much as possible (many models utilize the same data; having a central location for model owners to maintain data that has been validated and to ensure continuity across models is helpful and often overlooked)
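To make these practices concrete, the sketch below shows what a handful of automated data checks might look like, assuming position-level input data in a pandas DataFrame. The column names, general-ledger control total, and tolerances are hypothetical illustrations, not a prescribed standard.

```python
# A minimal sketch of automated data-quality checks (hypothetical schema).
import pandas as pd

def run_data_checks(positions: pd.DataFrame, gl_control_total: float,
                    tolerance: float = 0.005) -> list[str]:
    """Return a list of data-quality exceptions for review."""
    exceptions = []

    # Completeness: key fields should not be missing
    for col in ("account_id", "balance", "rate", "maturity_date"):
        n_missing = int(positions[col].isna().sum())
        if n_missing:
            exceptions.append(f"{n_missing} records missing '{col}'")

    # Reconciliation: model input balances should tie to the general ledger
    model_total = positions["balance"].sum()
    if abs(model_total - gl_control_total) > tolerance * abs(gl_control_total):
        exceptions.append(f"Reconciliation break: model {model_total:,.0f} "
                          f"vs. GL {gl_control_total:,.0f}")

    # Reasonableness: rates outside a plausible range warrant follow-up
    implausible = positions[(positions["rate"] < 0) | (positions["rate"] > 0.25)]
    if len(implausible):
        exceptions.append(f"{len(implausible)} records with implausible rates")

    return exceptions
```

Running checks like these each cycle, and logging the exceptions, also creates the documentation trail validators will look for.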
Assumptions Management
“Close enough” is not “good enough” when it comes to model assumptions. Assumptions must be supported both qualitatively and quantitatively, and be specific to your institution. They should also be evaluated and updated regularly, including reviews at ALCO with active participation by individual business lines. Documentation, ongoing monitoring, sensitivity testing, and stress testing are all critical components that should be included in assumptions development and support.
When managing assumptions, you should be able to answer questions such as the following (a simple sensitivity-testing sketch follows the list):
What are our key assumptions?
Are these assumptions based on actual experience? Peer analysis? Expert judgment?
How much can these assumptions vary?
Who is reviewing and approving the assumptions?
When is it necessary to revisit and possibly recalibrate our assumptions?
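As one illustration of the “how much can these assumptions vary?” question, the sketch below reruns a deliberately simplified earnings measure while shocking a single assumption at a time. The one-line net interest income (NII) approximation and all parameter values are illustrative assumptions, not a production model.

```python
# Minimal sketch of assumption sensitivity testing: rerun a simple
# earnings measure while shocking one assumption at a time.
# The one-line NII approximation and all figures are illustrative.

def projected_nii(deposit_beta: float, decay_rate: float,
                  rate_shock: float = 0.02, assets: float = 1_000.0,
                  deposits: float = 800.0, base_nii: float = 30.0) -> float:
    """Toy net interest income projection under a parallel rate shock."""
    asset_pickup = assets * rate_shock              # assets reprice fully
    retained = deposits * (1 - decay_rate)          # deposits that stay
    funding_cost = retained * rate_shock * deposit_beta
    return base_nii + asset_pickup - funding_cost

base = projected_nii(deposit_beta=0.40, decay_rate=0.05)
for beta in (0.30, 0.40, 0.50, 0.60):               # shock one assumption
    nii = projected_nii(deposit_beta=beta, decay_rate=0.05)
    print(f"beta={beta:.2f}  NII={nii:6.1f}  vs. base {nii - base:+5.1f}")
```

Documenting a table of results like this, alongside the rationale for the base assumption, gives reviewers a ready answer to the sensitivity question.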
Deposit Assumptions
Among the most important (and impactful) assumptions for an institution are those that relate to deposits. Yet while they have the greatest potential impact, deposit assumptions are also notoriously the hardest to validate. Assumptions related to factors such as rate sensitivity, core vs. “hot” money, and retention/decay on non-maturity deposits are key to an institution’s success. They should therefore be substantiated with activity tracking and deposit studies, and those analyses should include sensitivity and stress testing.
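As a simple illustration of the kind of analysis a deposit study involves, the sketch below estimates an annual retention rate from account-level balance history. The data layout and field names are hypothetical, and a real study would segment by product, rate environment, and customer relationship.

```python
# A minimal sketch of estimating non-maturity deposit retention from
# month-end, account-level balance snapshots (hypothetical data layout).
import pandas as pd

def annual_retention(balances: pd.DataFrame) -> float:
    """Share of period-start balances still on the books one year later.

    Expects columns: account_id, date, balance; assumes snapshots exist
    exactly one year apart. Decay is simply 1 - retention.
    """
    start = balances["date"].min()
    end = start + pd.DateOffset(years=1)
    opening = balances.loc[balances["date"] == start].set_index("account_id")["balance"]
    closing = balances.loc[balances["date"] == end].set_index("account_id")["balance"]
    # Credit only balances retained in accounts that existed at the start,
    # capped at the opening balance so growth is not counted as retention.
    retained = closing.reindex(opening.index).fillna(0).clip(upper=opening).sum()
    return retained / opening.sum()
```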
High-performing institutions understand the importance of defensible assumptions, and incorporate these practices to ensure reliability:
Document how assumptions are developed, supported, reviewed, and approved
Describe the sensitivity testing process, including which tests were performed and how results are communicated
Formalize the review and approval process for key assumptions
Ensure clarity with respect to overrides and overlays
Leverage a centralized assumption management process, if possible
Ongoing Monitoring
Ongoing monitoring helps confirm that a model is implemented appropriately and used as intended. Changes in products, exposures, activities, clients, or market conditions may all necessitate adjustments to – or even replacement of – a current model. Types of ongoing monitoring include the following (a back-testing sketch follows the list):
Data testing
Reconciliation of inputs and outputs
Back-testing
Assumption back-testing and sensitivity testing
Tracking and addressing open model risk management findings
Challenger modeling and benchmarking results against other models/sources
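As a simple illustration of outcomes-based back-testing, the sketch below compares model forecasts against realized results and flags any period whose error breaches a tolerance. The tolerance and sample figures are illustrative assumptions.

```python
# Minimal sketch of outcomes-based back-testing: compare model forecasts
# against realized results and flag breaches of a tolerance threshold.
# The tolerance and sample figures are illustrative assumptions.

def backtest(forecasts: list[float], actuals: list[float],
             tolerance_pct: float = 0.10) -> list[str]:
    """Return one finding per period whose forecast error exceeds tolerance."""
    findings = []
    for period, (f, a) in enumerate(zip(forecasts, actuals), start=1):
        error = (f - a) / a
        if abs(error) > tolerance_pct:
            findings.append(f"Period {period}: forecast error {error:+.1%} "
                            f"exceeds {tolerance_pct:.0%} tolerance")
    return findings

# Example: quarterly NII forecasts vs. actuals ($ millions)
print(backtest(forecasts=[30.1, 31.0, 32.5, 29.0],
               actuals=[29.8, 30.5, 28.9, 29.2]))
```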
An important piece of this is also determining who in the organization is responsible for ongoing monitoring. In many cases, the answer will be the model owner, who usually has the best understanding of the model and access to the needed data and tools. Model risk management personnel and vendors can also contribute to monitoring.
Ongoing monitoring provides confidence that models work as intended. The most effective processes include:
Developing a formalized process to evaluate data, assumptions, and output
Automating the ongoing monitoring process, where possible
Documenting all ongoing monitoring procedures, including defining all tests/analyses, establishing thresholds, and documenting actions to be taken in case of a breach (see the sketch below)
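One lightweight way to formalize those definitions is to codify each test, its threshold, and its breach action in a single documented structure. In the hypothetical sketch below, the test names, thresholds, and escalation steps are examples only.

```python
# Minimal sketch of codifying documented monitoring tests and thresholds;
# test names, thresholds, and escalation steps are hypothetical examples.
MONITORING_PLAN = {
    "nii_backtest_error": {
        "description": "Quarterly NII forecast vs. actual",
        "threshold": 0.10,   # absolute error, as a fraction
        "frequency": "quarterly",
        "breach_action": "Escalate to ALCO; document root cause within 30 days",
    },
    "deposit_decay_drift": {
        "description": "Observed vs. assumed non-maturity deposit decay",
        "threshold": 0.02,   # percentage-point drift
        "frequency": "annual",
        "breach_action": "Recalibrate assumption; route through change control",
    },
}

def evaluate(test_name: str, observed: float) -> str:
    plan = MONITORING_PLAN[test_name]
    if abs(observed) > plan["threshold"]:
        return f"BREACH – {plan['breach_action']}"
    return "Within tolerance"

print(evaluate("nii_backtest_error", observed=0.12))
```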
Governance
A model governance framework goes beyond just a process-and-controls or “change control” document. It is important to also consider reconciliation, third-party vendor risk, and policies/risk limits.
Model documentation is a critical component of model risk management success. It should be treated as an evolving document that:
Summarizes the model’s purpose, use, risk rating, limitations, etc.
Provides corporate memory
Serves as a blueprint for validation
Defines ongoing performance monitoring (OPM) tests and thresholds
Allows model risk management to look at models horizontally to identify common data and assumptions, as well as model interdependencies
When implementing an effective model risk management framework, ensure model governance:
Documents all governance-related aspects of the model and its process
Includes all roles and responsibilities throughout the organization
Provides details on change control
Includes testing procedures
Model Purpose and Use
The right validation makes sure that a model has not been implemented to do something it wasn’t designed to do. To confirm that models serve their intended purpose and have not been stretched beyond their original capabilities, top institutions:
Clearly distinguish the model’s purpose and use
Document the theory behind the model
Define the mathematical construct of the model
Outline all data and assumption considerations
Reference any regulatory or accounting guidance used
Purpose of the Validation
When you consider all of these factors, what is the ultimate purpose behind your model validations? Why is it important to treat validation as more than just a “check-the-box” exercise?
Quite simply, validation should be seen as a key component of strategy development for the institution. It creates clarity around important balance sheet management topics such as interest rate risk, liquidity, capital/earnings, and credit. Absent a robust validation process, misinformation, “over-information,” or “under-information” can lead to incorrect and/or sub-optimal decisions.
Validations that demonstrate effective challenge, apply adequate rigor, and go beyond “checking the box” increase confidence throughout the risk management process. Improving model inputs results in better model performance – and better model performance leads to greater reliability in model outputs. When key stakeholders have greater confidence in the results models provide, it leads to better strategic discussion and decision-making, which makes a critical difference, particularly in today’s challenging environment.
ABOUT THE AUTHOR
Mark Haberland is a Managing Director at Darling Consulting Group. Mark has over 25 years of experience providing balance sheet and model risk management education and consulting to the community and mid-size banking space. A frequent author and top-rated speaker on a wide array of risk management topics, Mark facilitates educational programs and workshops for numerous financial institutions, industry and state trade associations, and regulatory agencies.
Contact Mark Haberland: mhaberland@darlingconsulting.com or 508-237-2473 to learn more about DCG's approach to model validations (including CECL) and Model Risk Management.
© 2023 Darling Consulting Group, Inc.