As firms use Machine Learning (ML) to help them with credit/loan decisions, cross-sell promotions, and determining the next best action, they must know how certain customer features are being weighed in the ML models they run in production. Further, they are also required to understand whether those ML models are negatively impacting protected classes of customers or unfairly weighting features associated with those classes.

Additionally, as firms have looked to leverage AI to make more and more decisions for the company, the discussion of Human-Centered Machine Learning has become increasingly important. A lack of understanding of an ML model's inner workings in production can lead to legal and financial risks when it is discovered that the model discriminates (bias) against certain ethnicities, genders, or other protected groups. Data science practitioners and firms deploying AI in production want to 'get under the hood' of their models to see what impacts decisions.

Hence, in this self-paced course, we will build an AI model that predicts whether someone will default on their next credit card payment. Right after, we will use the following two Driverless AI tools to analyze the model and check for fairness: Disparate Impact Analysis and Sensitivity Analysis (SA).

In a manner of speaking, these two features provide a solution to a common problem in ML: the multiplicity of good models. It is well understood that, for the same set of input features and prediction targets, complex machine learning algorithms can produce multiple accurate models with very similar, but not identical, internal architectures. This alone is an obstacle to interpretation, and when using these types of tools as interpretation tools or alongside them, it is important to remember that the details of explanations can change across multiple accurate models. This instability of explanations is a driving factor behind the presentation of multiple explanatory results in Driverless AI, enabling users to find explanatory information that is consistent across multiple modeling and interpretation techniques. Such explanatory results can be accessed through the Disparate Impact Analysis and Sensitivity Analysis (SA) features.
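To make the idea of disparate impact concrete, here is a minimal, hypothetical sketch of the core metric such an analysis computes: the ratio of favorable-outcome rates between a protected group and a reference group (often checked against the "four-fifths rule"). This is an illustration only, not the Driverless AI implementation; the function name, group labels, and the toy data are all invented for this example.

```python
# Illustrative disparate impact ratio (four-fifths rule sketch).
# All names and data here are hypothetical, not from Driverless AI.

def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    outcomes: list of 1 (favorable, e.g. loan approved) / 0 (unfavorable)
    groups:   list of group labels, aligned with outcomes
    """
    def favorable_rate(label):
        selected = [o for o, g in zip(outcomes, groups) if g == label]
        return sum(selected) / len(selected)

    return favorable_rate(protected) / favorable_rate(reference)

# Toy approval decisions for two groups, "A" (reference) and "B" (protected).
outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups, protected="B", reference="A")

# A common rule of thumb flags ratios below 0.8 as potential disparate impact.
print(f"disparate impact ratio: {ratio:.2f}",
      "-> flagged" if ratio < 0.8 else "-> ok")
```

Here group A is approved at a 3/5 rate and group B at 2/5, giving a ratio of about 0.67, which falls below the 0.8 threshold and would be flagged for review. Driverless AI's Disparate Impact Analysis surfaces this kind of group-level comparison (among other metrics) directly in its interpretability dashboard.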