
Product managers: Ethics and AI

"First, do no harm."


The European Union has proposed a legislative framework for regulating the development and deployment of AI within the EU. The European Parliament plans to vote on adopting the regulation by the end of 2023.


In April 2023, tech leaders including Elon Musk and Steve Wozniak signed an open letter published by the Future of Life Institute calling for a six-month moratorium on all training of AI systems more powerful than GPT-4. Former Microsoft CEO Bill Gates responded that a pause would not solve the challenges ahead. Computer scientist Andrew Ng, co-founder of Google Brain, called the moratorium "a terrible idea," arguing that the only way to enforce it would be government intervention.


Honestly, I don't want governments involved in controlling work on AI innovations—or forbidding what they don't understand. After all, these are the folks who can barely use their phones and don’t know the difference between Facebook and TikTok.


Steven J. Vaughan-Nichols, in "There's no stopping AI now":

By and large, our elected leaders don't have a clue about technology. So for better or worse, we, the big tech companies, and we, the business users of generative AI, will be the ones calling the shots.


I’m reminded of the Three Laws of Robotics devised in 1942 by prolific science fiction author Isaac Asimov. They are:

  • First Law. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  • Second Law. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  • Third Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added a fourth law, the Zeroth Law, which precedes the others:

  • Zeroth Law. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

I wonder if governments or scientists or company leaders have read these laws—or even know about them.


Tristan Harris and Aza Raskin of the Center for Humane Technology remind us:

When you invent a new technology, you uncover a new class of responsibilities. If that [new] technology confers power, it starts a race. And if you do not coordinate, the race ends in tragedy.


And those responsibilities are really as simple as, “First, do no harm.”


Not “do no harm… unless it generates clicks.”

Not “do no harm… unless it generates revenue.”

Not “do no harm… unless it generates more visibility.”


While waiting for politicians and company leaders to catch up, product managers can take the lead. Brainstorm both good and bad scenarios for products and features. How could they be used and misused? How might they impact personal privacy and security? I’m sure (or I hope!) those who created social media platforms did not anticipate how they would be used to violate personal privacy and spread hate.


Product teams should be vigilant in considering the negative impacts of their strategic decisions.


What’s the harm in AI-generated results?


Let’s ask ChatGPT.


AI-generated results can have harmful consequences if they are biased, inaccurate, or used inappropriately. Here are a few examples:

  • Bias: If the AI model is trained on biased data or programmed with biased algorithms, it can produce biased results that may disproportionately affect certain groups of people. For example, an AI model used in the hiring process may inadvertently discriminate against candidates based on race or gender.

  • Inaccuracy: AI-generated results may not always be accurate, especially if the model has not been trained on enough data or the data it was trained on is not representative of the real world. Inaccurate results can have serious consequences in fields such as healthcare, where a misdiagnosis or incorrect treatment recommendation could be life-threatening.

  • Inappropriate use: AI-generated results can be misused if they are used for purposes they were not intended for. For example, an AI model used to predict criminal behavior may be used to unfairly target certain individuals or groups.

  • Lack of transparency: AI-generated results can be difficult to interpret or understand, especially if the model is a "black box" that does not provide any explanation for its decisions. This lack of transparency can make it difficult to identify and correct any errors or biases in the model.

Overall, it's important to recognize that AI-generated results are not infallible and should be used with caution. It's important to consider the potential harms and biases of AI-generated results and to ensure that they are used ethically and responsibly.


But what about copyright violations and content appropriation?


Those of us who write about product strategy, planning, and growth hope you’ll come to our websites for other insightful articles and videos. Many websites could become wastelands if their content is absorbed into the Borg of AI search.


Here's an easy fix that wouldn’t involve governments: AI search teams could voluntarily provide citations to their sources. (By the way, you can ask, “Please provide sources for the previous answer” or just “share citations.”)


Back in the day, Wikipedia let anyone write anything. Then the librarians got involved. They deleted most of the nonsense pages and required citations for statements.


Consider the ethical implications of your product’s features.


Transparency. Your software should be transparent about how it works, what data it collects, and how it uses that data. Users should have a clear understanding of what they agree to when they use your software.


Privacy. Users have a right to privacy, and you should protect their data. This means being transparent about what data is being collected, how it might be used, and giving users control over their data.


Accessibility. Your software should be accessible to all users, regardless of their abilities or disabilities. This means designing features that are easy to use and navigate, and providing alternative ways for users to interact with your software if needed.


Safety. Your software should not harm users or others. This means ensuring that your software is secure and cannot be used to harm or deceive users.


As a product manager, you have a responsibility to your users and the wider community to deliver ethical software that contributes positively to society. That includes considering how the feature could be misused to spread bias or hate.


This is an exciting time. Generative AI tools are making many lives easier.


But first, do no harm.



See also The A.I. Dilemma, in which Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society.
