FairPlay AI


Financial Services

Los Angeles, CA 1,588 followers

The world's first Fairness-as-a-Service company.

About us

The world's first Fairness-as-a-Service company. Our clients achieve higher profits and fairer results, with no increase in risk. Play Fair, Win Big with FairPlay.

Website
http://fairplay.ai
Industry
Financial Services
Company size
11-50 employees
Headquarters
Los Angeles, CA
Type
Privately Held
Founded
2020
Specialties
fintech, artificial intelligence, algorithmic decision-making


Updates

  • FairPlay AI reposted this

    Kareem Saleh

    Founder & CEO at FairPlay | 10+ Years of Applying AI to Financial Services | Architect of $3B+ in Financing Facilities for the World's Underserved

    Three months ago, a global banking giant engaged us to help with AI underwriting. Their journey:
    ➡️ Built a powerful ML model
    ➡️ Struggled to explain it
    ➡️ Couldn't effectively debias it
    Compliance limited the model to 3% of decision volume. Enter FairPlay. The AI model is now explainable, stress-tested, optimized, and monitored. The result? The compliance officer is satisfied, and the bank is scaling the model from 3% to 100% of decisions. It's AI-driven underwriting, unleashed. Welcome to FairPlay.

  • FairPlay AI reposted this

    Kareem Saleh, Founder & CEO at FairPlay

    Imagine you want to buy a home. You hear a radio ad for a mortgage lender, but the tone feels... off-putting. The lender refers to a neighborhood you're considering as a "war zone" and describes a local grocery store as "Jungle Jewel" due to its diverse clientele. Would you be encouraged to apply for a mortgage with this lender? If you think this scenario is far-fetched, think again. This is the real-life case of Townstone Financial, a Chicago-based mortgage lender. Townstone's radio show, meant to attract clients, instead caught the attention of the Consumer Financial Protection Bureau (CFPB) for all the wrong reasons. The CFPB sued Townstone, alleging these statements (and others like them) discouraged Black applicants from seeking mortgages. But here's where it gets interesting: initially, a district court dismissed the case, arguing that fair lending laws only protected those who had already applied for credit. The CFPB wasn't having it. They appealed, and in a surprising twist, three conservative judges on the U.S. Court of Appeals for the Seventh Circuit agreed with them. Their ruling? Marketing fairness matters. That is, fair lending obligations don't start when you fill out an application. They start the moment a lender begins marketing. For lenders and creditors, the message is clear: your marketing is now under the fair lending microscope. To navigate this landscape, several quantitative methods can help identify potential biases in your marketing strategies. For example, lenders should perform:
    ➡ Audience Composition Analysis: Assess the composition of your target audience against benchmarks such as the demographics of the communities your products and services serve, the demographics of your overall customer base, and/or the demographics of current users of the advertised product.
    ➡ Response Rate Analysis: Compare response rates across demographic groups by channel and campaign to identify disparities in how different groups are engaging with your marketing efforts.
    ➡ Offered Terms Analysis: Compare key loan term statistics (average, minimum, maximum, standard deviation) across protected groups to uncover potential disparities in offers.
    ➡ Geographic Distribution: Evaluate the geographic distribution of your marketing efforts to ensure you're not excluding certain neighborhoods and communities.
    A fast-growing use case at FairPlay is helping creditors implement analytical approaches to ensure their outreach is both effective and fair. Our tools automate the testing of marketing outcomes and provide actionable insights to refine outreach strategies. Remember, in the world of lending, your marketing isn't just a megaphone—it's a welcome mat. Make sure it's rolled out fairly for everyone. https://lnkd.in/g3T6ssZK
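A "Response Rate Analysis" like the one described above can be sketched in a few lines. The following is an illustrative Python sketch only; the group labels, toy data, and the 0.8 flagging threshold are hypothetical and do not represent FairPlay's actual methodology:

```python
# Illustrative sketch: compare marketing response rates across
# demographic groups and flag large disparities.
from collections import defaultdict

def response_rates(records):
    """records: iterable of (group, responded) pairs, responded in {0, 1}."""
    sent = defaultdict(int)
    responded = defaultdict(int)
    for group, did_respond in records:
        sent[group] += 1
        responded[group] += did_respond
    return {g: responded[g] / sent[g] for g in sent}

def disparity_ratios(rates):
    """Each group's response rate relative to the highest-responding group."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Toy campaign data: group "B" responds far less often than group "A".
records = [("A", 1), ("A", 0), ("A", 1), ("A", 1),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = response_rates(records)
ratios = disparity_ratios(rates)
# Illustrative rule of thumb: flag groups below 80% of the top group's rate.
flagged = [g for g, r in ratios.items() if r < 0.8]
```

In practice this comparison would be run per channel and per campaign, as the post suggests, and a flagged disparity is a prompt for investigation, not proof of discrimination.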

    US appeals court gives CFPB more freedom to fight housing discrimination


    reuters.com

  • FairPlay AI reposted this

    Kareem Saleh, Founder & CEO at FairPlay

    Three of the world's biggest banks are FairPlay customers. The key problems we solve for them are increasing growth, getting AI models into production, and automating compliance. Here's how we help them achieve their goals:
    🚀 Growth: Many banks aim to expand their reach into underserved segments, such as low- and moderate-income or majority-minority consumers. These banks often find that their marketing strategies aren't eliciting enough applications from these communities or that their underwriting models aren't adequately sensitive to these populations. At FairPlay, we optimize their marketing and underwriting models to improve targeting, increase response rates, and boost approval rates.
    🤖 Getting AI into Production: Implementing machine learning models can be challenging, especially when compliance teams lack experience with AI. We equip banks with technical tools designed for non-technical personnel, enabling quick, accurate assessments of models and data sources for compliance with fair lending, model validation, and other regulatory standards.
    🚨 Compliance Automation: Banks originating loans through third parties (such as fintech companies, auto dealers, and independent mortgage loan officers) need to ensure these partners do not pose risks to their institutions or customers. FairPlay's monitoring and alerting solutions swiftly identify whether third parties are inappropriately denying applications or marking up loans for protected-class consumers. We also help determine whether differences in outcomes are due to legitimate credit risk factors.
    Through these solutions, we help banks grow their top lines, realize the benefits of their AI investments, and mitigate reputational and regulatory risks. If these are the business outcomes you seek for your bank, let's talk!

  • FairPlay AI reposted this

    Kareem Saleh, Founder & CEO at FairPlay

    Did you know there's a quiet eviction crisis unfolding across America? Recent data reveals a troubling surge in evictions across major cities like Atlanta, Las Vegas, and Phoenix. According to the Eviction Lab at Princeton, evictions are up 35% to 46% compared to pre-pandemic levels in these areas. What's driving this trend?
    ➡ Soaring Rents: Rents for houses and apartments nationwide rose 30% in the four years spanning 2020 to 2023. Many tenants are struggling to keep up with this sharp acceleration in housing costs.
    ➡ Eviction Automation: Property management software has streamlined eviction filings, reducing room for negotiation or discretion from landlords.
    ➡ Housing Shortages: A critical lack of affordable options, especially in rapidly growing Sunbelt cities.
    ➡ End of Pandemic Protections: The expiration of COVID-era eviction moratoriums.
    Unsurprisingly, low-income renters and communities of color are disproportionately bearing the brunt of this crisis. Sharp rent increases have pushed many to the edge of affordability, exacerbating existing economic disparities and housing insecurity. To address the surge in evictions, some cities are implementing protective measures like legal representation for tenants. This may be a necessary stop-gap measure, but the long-term consequences of widespread displacement remain a serious concern. While there's no single (or simple) solution to this complex issue, innovative approaches are emerging. Companies like Flex – whose compliance team is led by the incomparable Chelsea K. – are developing tools that allow renters to pay incrementally instead of in one lump sum. Many low-income households have irregular income streams, making large monthly rent payments challenging. By enabling tenants to pay in smaller, more frequent installments, they can better align rent payments with their income cycles, reducing the likelihood of missed payments and evictions. This flexibility helps tenants maintain financial stability and avoid the displacement associated with rent arrears. What other solutions can we explore to tackle America's growing eviction crisis? Share your thoughts in the comments below!

  • FairPlay AI reposted this

    Kareem Saleh, Founder & CEO at FairPlay

    Will New State Laws Make Your Insurance Fairer? State AI legislation is rapidly evolving, and its impact on the insurance industry is becoming increasingly significant. Marla H. Kanemitsu and Manu Sharma at Tusk Venture Partners conducted an in-depth analysis of 190 proposed state bills that touch on AI and insurance and identified four main categories:
    ➡ AI Discrimination: These bills, introduced in about a dozen states, address the use of AI in making high-stakes, "consequential" decisions that could result in discriminatory outcomes. They often require audits or impact assessments of AI systems prior to production. Colorado has passed such legislation, Connecticut came close, and California will likely follow suit. Illinois and Virginia are also considering similar bills.
    ➡ Utilization Review: These bills focus on the process by which insurers determine coverage for medical care. They generally require disclosures about the use of AI in the utilization review process and may restrict the use of AI by health insurers for that purpose. Some would require insurers to provide their algorithms and data sets to state Departments of Insurance for compliance assessments. Although none of these bills passed this year, they are expected to be reintroduced in future sessions.
    ➡ Study Committees: AI-focused study committees aim to understand AI better and indicate growing legislative interest in proactive oversight of AI systems. While often seen as a legislative graveyard, these committees will likely study first movers like Colorado and New York to see if their approaches can serve as models for other states.
    ➡ Miscellaneous AI Restrictions: These include various other restrictions on AI use in insurance applications, such as in underwriting or claims management.
    Additionally, Marla and Manu point out that the National Association of Insurance Commissioners (NAIC) has created a Task Force on Third Party Models and Data, chaired by Colorado insurance commissioner Michael Conway. Given Colorado's proactive and highly prescriptive stance on AI regulation, this chairmanship is particularly significant. On a recent call, the Task Force chair said he aims to develop a framework to help states ensure that third-party data and models used to make decisions about insurance consumers are fair. My key takeaways: state legislators are under significant pressure to protect the public from potential AI abuses. This will likely result in more AI-driven decisions being scrutinized for discrimination and more companies facing nondiscrimination obligations, not just in insurance but in other domains too.

  • FairPlay AI reposted this

    Kareem Saleh, Founder & CEO at FairPlay

    I’m in Washington DC today giving a talk on how algorithmic debiasing solutions can safely increase mortgage approval rates for Black and Hispanic Americans at the National Fair Housing Alliance (NFHA) annual meeting. The conference began with a powerful discussion between Bernice A. King, daughter of MLK, and NFHA CEO Lisa Rice. They explored the intersection of housing, civil liberties, and technological advancement, highlighting how access to affordable homes is essential for upward mobility and overall well-being. In light of recent events, Dr. King shared her thoughts on MLK’s legacy for America today. I’ll relay her message in her own words: “My father believed in the power of non-violence. Non-violence achieved the Civil Rights Act, the Voting Rights Act, the Fair Housing Act. Non-violence keeps reminding you: we’re inter-related, we’re inter-connected, and by looking out for others, you are actually looking out for yourself. We need to be in the business of creating, not destroying. There are previous generations who dealt with dark situations, but there’s always been a critical mass of people who carry the light forward.” Energized by this call to action, I’m determined to carry Dr. King’s message into my talk – advocating for a future where algorithms empower, not oppress, and striving to honor MLK’s vision in the age of AI.

  • FairPlay AI reposted this

    Kareem Saleh, Founder & CEO at FairPlay

    Insurers in New York have just received a gift: clarity. Yesterday, the New York State Department of Financial Services' Superintendent issued a groundbreaking circular. This guidance, which the FairPlay team is proud to have collaborated on, provides a clear roadmap for insurers seeking to leverage AI and alternative data responsibly in high-stakes applications. In a nutshell, NY is saying you can use AI and alternative data in insurance underwriting and pricing, but with essential safeguards:
    🔹 Proxy Detection: Identify and mitigate potential biases in your data.
    🔹 Fairness Testing: Ensure your models are fair and unbiased.
    🔹 Less Discriminatory Alternatives: Seek alternative algorithms with lower bias where possible.
    🔹 Model Governance & Documentation: Maintain robust oversight and documentation.
    This is entirely achievable. I know because FairPlay customers, including some of the world's largest banks, do it every day. Our fairness-as-a-service solutions were purpose-built for this regulatory regime. Carriers who swiftly adopt AI and alternative data with appropriate bias detection, fairness optimization, and governance will gain a competitive edge in the NY market and outperform their peers. If that aligns with your strategy, give us a call!
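To make the "Proxy Detection" safeguard concrete, here is a minimal, hypothetical Python check: measure how strongly a candidate model input correlates with a protected attribute, since a highly predictive input may be acting as a proxy. The feature, toy data, and flagging threshold below are invented for illustration and are not drawn from the NY DFS circular or FairPlay's tooling:

```python
# Illustrative proxy-detection check: correlate a candidate feature
# with protected-group membership (encoded 0/1).
from statistics import mean, pstdev

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Toy data: a neighborhood-income feature vs. protected-group membership.
feature = [95, 90, 88, 60, 55, 50]
group = [0, 0, 0, 1, 1, 1]

r = correlation(feature, group)
is_potential_proxy = abs(r) > 0.5  # illustrative flagging threshold
```

Real proxy detection typically goes further, e.g. training a model to predict the protected attribute from all inputs jointly, but the idea is the same: if the attribute is recoverable from the inputs, the inputs encode it.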

  • FairPlay AI reposted this

    Kareem Saleh, Founder & CEO at FairPlay

    What do chest X-rays have to do with discrimination? A lot, it turns out. And it's quite telling about how unintentional biases arise in data-driven systems. Let me explain. Researchers built a model to predict disease from chest X-rays. The model outperformed radiologists—but it was also super biased, significantly underdiagnosing cardiomegaly (an enlarged heart) in Black and female patients. The researchers then used the same data inputs to build new models aimed at predicting protected status (race, gender, age). The goal was to find out whether the model's input data was encoding information about sensitive attributes—and it was. This algorithmic encoding of sensitive attributes was producing unfair outcomes: underdiagnosis of disease for female and Black patients. The researchers then applied debiasing methodologies, which generated fairer alternative algorithms "without losing notable overall performance for disease prediction." Eureka! Right? Except that the researchers encountered two problems:
    1️⃣ Getting the model to optimize for accuracy and fairness caused other key clinical metrics—like calibration error—to deteriorate. So while there wasn't a meaningful fairness-accuracy tradeoff, debiasing introduced tradeoffs with other metrics that mattered.
    2️⃣ The fairness gains did not persist when the distribution of patients shifted—that is, when the make-up of the patient population changed.
    The researchers write: "Our findings underscore the necessity for regular evaluation of model performance under distribution shift, challenging the popular opinion of a single fair model across different settings." They also suggest practitioners trying to achieve fairer models should focus on building algorithms that are "robust to arbitrary domain shifts." This insight is one of the animating ideas behind the FairPlay Framework for Picking a Fairer Model, which we launched a few months ago: it offers a method for forecasting performance and fairness outcomes under the many different operating conditions that could occur, because applicant mixtures, credit policies, and the world in general can all change on a dime. Contact me to learn more about building models that are robust to distribution shifts and how FairPlay can help!
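The distribution-shift point can be made concrete with a toy calculation: if each group's outcome rate is a mixture of segment-level rates, then changing the segment mix changes the fairness gap even though the model itself is unchanged. All numbers, groups, and segments below are invented for illustration:

```python
# Toy illustration: a model that looks fair under one population mixture
# can show a large gap when the mixture shifts.

SEGMENT_APPROVAL = {            # fixed model approval rate per (group, segment)
    ("A", "thick_file"): 0.80,
    ("A", "thin_file"): 0.40,
    ("B", "thick_file"): 0.78,
    ("B", "thin_file"): 0.20,
}

def group_rate(group, mix):
    """Overall approval rate for a group under a segment mixture {segment: share}."""
    return sum(share * SEGMENT_APPROVAL[(group, seg)]
               for seg, share in mix.items())

def gap(mix_a, mix_b):
    """Absolute approval-rate gap between groups A and B."""
    return abs(group_rate("A", mix_a) - group_rate("B", mix_b))

# At "training time" both groups are mostly thick-file: the gap is small.
train_gap = gap({"thick_file": 0.9, "thin_file": 0.1},
                {"thick_file": 0.9, "thin_file": 0.1})

# Group B's applicant pool shifts toward thin-file borrowers: the gap widens,
# with no change to the model at all.
shifted_gap = gap({"thick_file": 0.9, "thin_file": 0.1},
                  {"thick_file": 0.5, "thin_file": 0.5})
```

This is why the researchers' advice to re-evaluate fairness under distribution shift matters: a single point-in-time fairness test only certifies the mixture it was run on.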

  • FairPlay AI reposted this

    Kareem Saleh, Founder & CEO at FairPlay

    I was recently asked on a podcast: what AI risks keep you up at night? My answer: how much time have you got? I keep an eye on the AI incident tracker, a database of AIs gone off the rails—it's enough to give anyone trouble sleeping. (Link 👇) As you read this, AI harms are befalling people. These include:
    🔹 Facial recognition systems incorrectly identifying consumers in stores as shoplifters or students taking exams as cheaters;
    🔹 Deep fakes depicting sexually explicit images of minors;
    🔹 Political dirty tricksters spreading misinformation.
    But what truly keeps me up at night are the insidious, under-the-radar AI harms that profoundly impact people's lives in subtle yet significant ways without drawing attention. For example:
    ▶ In credit scoring: AI algorithms can perpetuate or exacerbate existing biases in financial data, leading to unfairly low credit scores for certain groups, particularly marginalized communities.
    ▶ In tenant screening: AI systems used by landlords and property management companies might unfairly disqualify potential tenants, affecting their ability to secure housing.
    ▶ In employment: AI-driven recruiting tools can inadvertently filter out qualified job candidates.
    ▶ In healthcare: AI applications might not account for the nuances of diverse patient populations, potentially leading to misdiagnoses and unequal access to treatment.
    AI is a mega-trend that's here to stay, and it has the potential to do amazing things. But there are "quiet" AI harms happening all around us today. While public discourse often focuses on big existential risks or widespread job displacement, we have to make sure these doomsday scenarios aren't distracting from the immediate, real-world impacts of AI that are shaping people's lives right now—often disproportionately at the expense of marginalized groups.

  • FairPlay AI reposted this

    Kareem Saleh, Founder & CEO at FairPlay

    Thank you, Brandon Milhorn, Sebastien Monnet and your awesome team at the Conference of State Bank Supervisors (CSBS) for hosting me yesterday to discuss the use of AI in financial services. It was so good to be with old and new friends like Avy Mallik and Konrad Alt. My key takeaway: eventually, bank supervisors will have their own bots—AI for enhanced compliance monitoring and machine learning tools to assess internal controls. The future of bank supervision will be automated, intelligent, and more efficient than ever.

    CSBS President and CEO Brandon Milhorn sat down with CSBS's MaryBeth Quist and Brennan Zubrick yesterday at the CSBS Summer Regulatory Summit to discuss strengthening the state system through regulatory and supervisory technology. The CSBS Summer Regulatory Summit is a great networking event for senior state officials addressing issues in policy, legislation, and bank and non-bank supervisory and regulatory matters.



Funding

FairPlay AI: 3 total rounds

Last round: Series A, US$10.0M

See more info on Crunchbase