From performing data analyses to summarizing complex research findings to analyzing patterns across millions of images, AI today seems limitless in its possibilities. While just a few years ago many were asking, “what can AI do?,” it may now be more appropriate to ask, “what can’t AI do?” That’s one of the questions tackled in CodeSignal’s recent webinar on The AI Revolution in Tech.
Brenna Lenoir, SVP of Marketing at CodeSignal, held a fireside chat with Cassie Kozyrkov, Google’s first Chief Decision Scientist, about the transformative role of AI in the tech industry.
Known for founding the field of Decision Intelligence at Google, Kozyrkov is now Founder and CEO of a stealth AI startup and runs Data Scientific, an elite agency that helps world leaders and chief executives optimize their biggest decisions.
In this post, we’ll revisit some of the core themes from their conversation, including Kozyrkov’s advice on how business leaders today should approach AI, key AI literacy skills for leaders, and how AI is shaping the future of business. A recording of the fireside chat is available for viewing here.
Cassie Kozyrkov on AI and business leadership: 6 key takeaways
1. Don’t adopt AI for the sake of adopting AI. Kozyrkov warned business leaders about adopting AI solutions without first determining that AI is needed to solve a business problem. When leaders are thinking about adopting AI, Kozyrkov suggests, the best questions to start with are: “What are my business objectives? And how can AI help me meet those?”
When automating processes, for instance, AI should be used only when traditional automation methods, which give engineers greater control, aren’t up to the task. Adopting AI should be an “act of desperation,” Kozyrkov explains, that leaders opt for when tasks are too complex for traditional automation.
2. Domain knowledge (still) matters. Delegating your decision-making entirely to AI isn’t feasible—or advisable, says Kozyrkov. To make sense of your data and employ AI in ways that align with your business objectives, you need people who know the context of your business and who can evaluate the quality of the data. Poor-quality data will lead to poor-quality AI outputs.
3. There are 3 questions leaders need to ask themselves when adopting AI. Also known as the “Kozyr criteria,” these are the questions that Kozyrkov says will help leaders understand how AI works in a way that matters for their business. They are:
- What is the objective of the AI system? Knowing what the system is optimized for, and what “success” looks like, is the first step to understanding how it works.
- What data set are you going to use? AI systems are built using huge collections of data, from which they learn to identify patterns and connections. Understanding the quality of the data and where it came from is the next step toward understanding an AI system.
- How will you test it? This is where the domain expertise comes into play. It’s important to have people on your team who can critically evaluate the outputs of your AI system and ensure that it’s working as it’s meant to.
4. AI presents new challenges for managing talent. As AI opens up a new world of possibilities for automation, many of the manual tasks that make up employees’ day-to-day activities may soon become obsolete. A key challenge for leaders will be figuring out how to manage talent and measure employees’ value on the basis of the non-automatable thinking work they do.
The solution, Kozyrkov explains, cannot be simply asking employees to do 40 hours of thinking a week once all their other tasks are automated—humans can’t spend that much time just thinking. How leaders will manage and measure work products is a pressing and complex problem to be solved.
5. The core AI literacy skills that leaders need aren’t technical. In fact, Kozyrkov says, leaders really don’t need to get into the weeds of how AI systems work technically at all. Instead, it’s much more important that they understand the objectives of the system (the Kozyr criteria outlined above), and hone what she calls “hard-to-automate skills.” These include decision-making skills, creativity, social skills, trust, and collaboration.
6. Moving forward, the tech industry needs more nuance in its approach to AI. AI’s power to automate processes, personalize outputs, and make sense of data in ways never before possible can be tantalizing for tech leaders—it’s easy to want to adopt AI-powered solutions for every problem. But leaders need to think carefully about the problems they’re solving and how important it is to get them right when adopting AI.
Think of the implications of an AI system that analyzes brain tumors, Kozyrkov suggests. What are the ethical implications if the system skews toward misidentifying benign tumors as malignant? Or misidentifying malignant tumors as benign? These are the types of nuanced, high-stakes ethical questions leaders need to be thinking about as they integrate AI into their business decision-making.
Take the next step
The AI future is already here. Want to dig deeper into the technical skills needed to use and even build AI systems? Master these skills with CodeSignal’s learning paths in business-critical AI and ML skills. Sign up to get started for free.