Navigating Realities: Insights on AI Deployments from Author Sol Rashidi

In this interview, Sol Rashidi talks about the ins and outs of deploying AI and the common problems companies run into as they try to integrate it. With many years of experience leading AI projects, Rashidi stresses that the human element is often the biggest obstacle, and that good leadership and change management are essential for getting past challenges such as differing dispositions within a team, pushback, and doubt. Her honest account shows the realities of putting AI to work, with useful lessons drawn from both successes and failures, a perspective she presents in her well-known book, “Your AI Survival Guide.”

In “Your AI Survival Guide,” you discuss real-world AI deployments. Could you share some of the most common challenges organizations face when implementing AI projects?

I’ve had the fortunate opportunity to conceptualize, develop, and push many AI products into production, and while the technical challenges vary slightly, all of them, without a doubt, have the same single challenge in common: the people.

I find that people can kill a project before it even starts.

When you’re in a position to build a team from scratch, you look for talent, skillset, mindset, grit, and resilience. That’s easy. It may take longer to build the team, but eventually you build the team you need.

However, when you’re in a situation where you inherit a team or are forced to work with a cross-functional group of individuals from varying teams, backgrounds, and dispositions towards AI, the technical complications are minor in comparison. Dealing with the naysayers, the pessimists, the know-it-alls, and those who value job preservation over evolution makes the situation very challenging. Teammates who are complacent and view AI as a threat rather than an opportunity can create real project drag, hurt morale, and make the lead’s job very difficult. As such, it takes a lot more effort, patience, time, and fortitude to push past these challenges, and the multiplier effect is often 3x: something that should take you 4 months takes 12, and you must be mentally prepared for that.

A second common challenge I see is a lack of investment in change management. While change management is often spoken of, talked about, and on the agenda, it never gets the investment or focus it needs during and after an AI deployment. Basic pillars, such as updating the jobs to be done, modifying operational processes, and making organizational structural changes to reach scale and adoption, are skipped or, more often, shortchanged. There’s a mentality that AI is ‘plug and play’ when in actuality there’s tremendous work to be done on organizational alignment and modifications.

What inspired you to write a book specifically focused on navigating the complexities of AI deployments?

I’ve read more than my fair share on AI, and what I noticed was we have a lot of great technical resources. We have a lot of credible academic resources. We have a lot of wonderful research resources. And we have access to a lot of management consulting strategy resources on the possibilities and opportunities of AI. But none of the resources go into any level of detail as to where things go wrong, why 98% of projects fail, and where the hype gets generated.

It’s almost as if we’re allergic to talking about failures and lessons learned; we just want the Instagram snapshot of the wonderful marvels of AI.

The reality is that deployments at scale are hard, and I felt there were no resources that talked about the realities of doing the work. So I wanted to write a book that keeps it real and shares where mistakes are often made and where assumptions become invalid, along with real stories and anecdotes of my own mistakes and experiences.

What role does data quality and availability play in the success of AI deployments, and how can organizations address data-related challenges?

You can’t do AI without data, that much is true! While we like to say data is not of the right quality, nor is it accessible, the reality is it never was and never will be. The proliferation of data, and the volume, variety, and velocity at which it’s generated, makes it a zero-sum game for data teams. Yet we still find ways to make it work. Enterprises develop their global strategies based on the data available to them. Scale-ups develop their go-to-market and commercialization approaches based on the data available to them. While the state of data will never be perfect, you make do with what you have.

That’s why when it comes to AI projects, you have to be creative and take a different approach than, say, operational projects. Not all data is created equal, and you have to know which is which. We have data sets that are poor in hygiene, yes, but we have some that are good enough. We have data that is locked up and unavailable, yes, but we have other data that is accessible. You must tier your data sets and data domains, tag them as Good, Good Enough, Fine, or Poor, and focus your use cases on the Good and Good Enough categories.

Data shouldn’t deter you from starting, but your use cases should be determined based on the data that is somewhat good and accessible. There’s no point in picking a use case where data becomes your uphill battle, because you’re no longer deploying an AI project but a Data project.
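To make the tiering idea concrete, here is a minimal Python sketch, purely illustrative and not taken from the book, of how a team might tag data sets with these tiers and keep only the use cases backed by Good or Good Enough data; the data set names, use cases, and helper function are assumptions made up for the example.

```python
from dataclasses import dataclass

# Tiers follow the categories Rashidi describes.
USABLE_TIERS = {"Good", "Good Enough"}  # focus use cases here; skip "Fine" and "Poor"

@dataclass
class DataSet:
    name: str
    domain: str
    tier: str  # "Good", "Good Enough", "Fine", or "Poor"

@dataclass
class UseCase:
    name: str
    required_datasets: list

def viable_use_cases(use_cases, datasets):
    """Keep only use cases whose required data sets all fall in a usable tier."""
    tier_by_name = {d.name: d.tier for d in datasets}
    return [
        uc.name
        for uc in use_cases
        if all(tier_by_name.get(n, "Poor") in USABLE_TIERS for n in uc.required_datasets)
    ]

# Hypothetical data inventory and candidate use cases.
datasets = [
    DataSet("orders", "sales", "Good"),
    DataSet("web_clickstream", "marketing", "Good Enough"),
    DataSet("legacy_crm_notes", "sales", "Poor"),
]
use_cases = [
    UseCase("demand_forecasting", ["orders", "web_clickstream"]),
    UseCase("churn_prediction", ["orders", "legacy_crm_notes"]),
]
print(viable_use_cases(use_cases, datasets))  # ['demand_forecasting']
```

The point of the sketch is simply that tiering happens before use-case selection, so data quality shapes the roadmap instead of derailing a project mid-flight.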

What ethical considerations should organizations keep in mind when deploying AI systems, and how can they ensure responsible and ethical use of AI technologies?

This is a BIG topic and one that deserves its own time and space. However, the one non-negotiable that I think all companies need to consider is having a ‘Human in the Loop’. It needs to be standard practice, not just best practice.

The fact is, we’re all aware that machines aren’t perfect. What we overlook, however, is that we’re very quick to point out errors in AI models, yet we don’t measure errors in the human model. We don’t do a good job of calculating human miscalculations and misjudgments, but we instantly recognize a machine’s. Which is why, when it comes to deploying AI systems and embracing responsible and ethical practices for when they get things wrong, it’s imperative a human is always in the loop. No result, outcome, objective, or recommendation should be made available internally or externally without a human reviewing the work and making sure it meets the MAT (Minimum Acceptability Threshold), a metric I created back in my Watson days for my clients. MAT is the minimum acceptability criteria for determining whether something is ready for prime time and can be pushed to production and released for internal employees or external clients to use. Does it meet your threshold for accuracy? Does it meet your threshold for risk exposure? Can you rebound in the rare case the results are off? That is what MAT is about. If you want to be responsible and ethical, apply common sense and make sure the outcomes of your AI project align with your values and pass your MAT criteria.
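The interview does not prescribe an implementation for MAT, but as a rough sketch of the idea, a release gate might check measured accuracy and risk exposure against business-set thresholds and require an explicit human sign-off before anything ships; the threshold values, field names, and function below are illustrative assumptions, not Rashidi’s specification.

```python
from dataclasses import dataclass

@dataclass
class MATCriteria:
    # Illustrative thresholds; real values would be set by the business owner per use case.
    min_accuracy: float = 0.95
    max_risk_exposure: float = 0.05   # e.g. share of outputs touching high-risk decisions
    require_human_signoff: bool = True

def meets_mat(accuracy: float, risk_exposure: float, human_reviewed: bool,
              criteria: MATCriteria = MATCriteria()) -> bool:
    """True only if the output clears every Minimum Acceptability Threshold check."""
    return (
        accuracy >= criteria.min_accuracy
        and risk_exposure <= criteria.max_risk_exposure
        and (human_reviewed or not criteria.require_human_signoff)
    )

# An accurate recommendation that no human has reviewed still does not ship.
print(meets_mat(0.97, 0.02, human_reviewed=False))  # False
print(meets_mat(0.97, 0.02, human_reviewed=True))   # True
```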

How do you recommend organizations measure the ROI (Return on Investment) of their AI deployments, and what metrics should they consider?

I think there are three ways to measure ROI when it comes to AI deployments.

We have:

  • Financial ROI: The standard practice of determining whether the returns are greater than the costs over a given period of time.
  • Cultural ROI: The cost to culture if new advents and innovations aren’t explored and the cost to the company for not continually evolving.
  • Relevancy ROI: The cost to your longevity and relevance in the marketplace if your company is not consistently growing, exploring, innovating, and evolving.

Often, we focus on the first benchmark, Financial ROI, but with AI deployments there are intangibles that need to be measured too in order to understand the true implications of not exploring them.
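Of the three, only Financial ROI reduces cleanly to arithmetic. The short sketch below shows the standard returns-versus-costs calculation over a multi-year horizon, using made-up numbers purely for illustration; Cultural and Relevancy ROI remain judgment calls that have to be assessed qualitatively.

```python
def financial_roi(benefits_by_year, costs_by_year):
    """Standard ROI: (total benefits - total costs) / total costs over the horizon."""
    total_benefits = sum(benefits_by_year)
    total_costs = sum(costs_by_year)
    return (total_benefits - total_costs) / total_costs

# Hypothetical three-year AI deployment: heavy year-one cost, growing returns.
benefits = [100_000, 400_000, 700_000]   # value delivered each year
costs = [500_000, 150_000, 150_000]      # build, run, and change-management costs
print(f"{financial_roi(benefits, costs):.0%}")  # 50%
```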

What are some common misconceptions or myths about AI deployments that you address in your book, and what are the realities behind them?

The number one misconception about AI deployments is that they are ‘plug & play’. They are not, not even for off-the-shelf solutions. Our marketing has gotten so advanced and creative that we set the tone and perception for individuals and companies that ‘if you buy this, you will get that’ in minutes, when in reality so much more goes into it. You have to:

  • Assess where that advent is relevant in your personal life or organization.
  • Determine what needs to change in your current processes and behaviors to fully adopt the capability.
  • Understand there’s a learning curve to overcome; you must create space and time to be a student of the capability in order to leverage all its strengths.
  • Identify where it’s worth deploying, and where it’s not.
  • Experiment through trial and error.
  • Determine what organizational or personal changes have to take place to embrace this capability and make it a habit.

In your personal life, everything above is true. For organizations, there’s a multiplier of 50x, and a rigorous change management plan must be put into place if you want your deployment to scale and be adopted by the masses. Even the simplest of solutions requires change management.

Download Sol Rashidi’s latest book, “Your AI Survival Guide: Scraped Knees, Bruised Elbows, and Lessons Learned from Real-World AI Deployments.”