Why Adopting AI is a Moral Imperative

The world has endless moral quandaries, and AI is just the latest. Should we adopt it, and in what capacity? How can it contribute to safety and security? 

19 December 2024

by Cory Hymel


Frank Chen posed a question five years ago, when self-driving car technology was becoming feasible: “If self-driving cars are 51% safer, are we not morally obligated to adopt them?” The question has sparked debate ever since, and everyone who hears it instantly thinks they know the answer one way or the other. The knife’s edge is what makes it so interesting: 51% is hardly a runaway statistic.

However, let’s do the math. The National Highway Traffic Safety Administration estimated 42,795 traffic fatalities in 2022. One percent of that number, the margin that hypothetically separates self-driving cars from a coin flip, is about 428 people, enough to fill roughly one and a half Boeing 777s.
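As a sanity check, the arithmetic fits in a few lines of Python. The fatality count is the NHTSA estimate above; the roughly 300-seat 777 capacity is my own assumption based on typical airline configurations:

# Back-of-envelope: lives represented by a 1% safety margin.
nhtsa_fatalities_2022 = 42_795   # NHTSA estimate cited above
margin = 0.01                    # one point past the 50/50 knife's edge

lives = nhtsa_fatalities_2022 * margin          # 427.95
print(f"Lives in a 1% margin: ~{lives:.0f}")    # ~428

# Assumption: a typical Boeing 777 seats roughly 300 (varies by configuration).
seats_777 = 300
print(f"Planeloads: ~{lives / seats_777:.1f}")  # ~1.4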

Are 428 lives enough to qualify as a moral obligation to adopt AI? 

There is no simple answer, and the debate is not open and shut, but the answer is yes. The opportunity to save lives, enhance quality of life, and address long-standing inequality is significant and incontrovertible. That doesn’t mean the argument is simple, or that the technology is done developing. 

Adopting AI, or any technology, too hastily can cause real harm, which keeps this debate hotly contested even as the technology improves. However, avoiding a technology that could save lives may be just as ethically problematic as adopting it before it is mature. 

The Choice at Hand
There is a lot of anxiety around adopting AI technology, and for good reason. In industry after industry, AI could replace people in their jobs, and people will always be hesitant to remove the human element from any kind of work. 

This is not the place to dig into the eternally hazy question of what it means to be “human,” though that question is part of the debate. Adopting AI doesn’t mean throwing out human input; used properly, it means accepting that AI does some things better, and not taking it personally that people will be outperformed at specific tasks. 

The argument, then, boils down to whether AI’s efficiency, safety, and quality-of-life benefits outweigh the disruption of life and industry as we know it. 

Human beings as a species are notably hesitant to accept change. Our cognitive biases, shaped by evolution, are habits our minds fall back on that were helpful when we were running from predators, but they do not always serve us well in perceiving the modern world. 

Understanding which biases might be at play is essential to getting a solid, holistic sense of the argument. Knowing why we feel the way we do doesn’t negate people’s fears and anxieties, but it can be helpful when deciding how far to trust AI. 

These biases are compounded by the fact that the technology is not universally accessible. Though AI is baked into most of our daily lives in some form or another, its true potential sits behind a significant privilege gap. Without a push to democratize the technology, many of its benefits remain hypothetical to the average person. 

Without tangible proof of concept, these mindsets can abruptly halt innovation. That is not to say the human brain is foolish for hesitating to trust new technology, but being aware of why we feel the way we do helps us work toward a more objective view.

Practical Example: Self-Driving Cars
Let’s address some of these biases by looking at Frank Chen’s concrete example. Self-driving cars, and the technology they operate on, form a microcosm of the larger discourse, one with digestible statistics to back it up. 

Start with the numbers: studies show that human drivers crash with meaningful risk of injury at a higher rate than autonomous vehicles. Human drivers caused 0.24 injuries per million miles (IPMM) and 0.01 fatalities per million miles (FPMM), while self-driving cars caused 0.06 IPMM and 0 FPMM. 
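To put those rates on the same footing as the annual fatality count above, here is a rough sketch in Python. It assumes about 3.2 trillion US vehicle miles traveled per year (the approximate FHWA figure, not a number from the studies quoted) and, unrealistically, a complete swap of human driving for autonomous driving; the quoted 0.01 FPMM is also rounded, so the human-driver total undershoots the NHTSA count above:

# Hedged sketch: what the quoted crash rates imply at national scale.
US_VMT_MILLIONS = 3_200_000  # assumed ~3.2 trillion US miles/year, in millions

rates = {
    "human":        {"ipmm": 0.24, "fpmm": 0.01},
    "self-driving": {"ipmm": 0.06, "fpmm": 0.00},
}

for driver, r in rates.items():
    injuries = r["ipmm"] * US_VMT_MILLIONS
    fatalities = r["fpmm"] * US_VMT_MILLIONS
    print(f"{driver:>12}: ~{injuries:,.0f} injuries, ~{fatalities:,.0f} fatalities/year")

Even under these rough assumptions, the gap between the rows is the point: the quoted rates imply a fourfold difference in injuries and tens of thousands of fatalities per year on the human side.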

Remember the number 428? These are not just statistics; they represent human lives that could be saved with AI technology. For vehicles, the moral argument seems obvious. 

So why stop there? Medicine, public health, food safety, agriculture, cybersecurity, crime prevention, and military science can all benefit from AI’s increased efficiency and accuracy. Finding security flaws and patching them before they become breaches, predicting crop failure before a harvest is lost, diagnosing diseases faster and more accurately before patients’ lives are upended: in all of these areas the numbers don’t lie, though they are more nuanced than a statistic like “fatalities per million miles.”

These examples are more dramatic than freeing up time in software development, but are they so much more important? AI can measurably improve our daily quality of life by automating any number of mundane tasks, increasing accessibility, and enhancing our security. Our moral obligation to adopt AI is as much about contributing to general human well-being as it is about preventing unnecessary deaths in traffic accidents. 

Crunching the Numbers

Even as individuals grapple with the moral quandaries surrounding AI, many corporations have made their decisions. For them, the ROI speaks for itself. 

Amazon’s significant shift toward automation, for example, has produced tangible, measurable gains in efficiency. Next to results like that, questions of morality can seem academic and nebulous. 

“Academic and nebulous” doesn’t mean unimportant, however. The economy depends on people having jobs, and AI will inevitably replace some of them. Businesses have to weigh the human cost of their decisions alongside the potential for growth; the economic cost of adopting AI depends as much on the changing landscape of the job market as it does on streamlined operations. 

The shift will not be simple; organizations must keep employees’ welfare in mind as they adopt this technology. 

Designing Around the Hesitation
We have established why AI is worth adopting; we have also shown why people might not want to. However, the technology is barreling ahead, and no matter what side of the issue you fall on, you’re likely to get swept up in it.

With the choice of whether to adopt largely taken out of our hands by the obvious benefits, the question turns to how. How do we bridge the gap between the people eager to integrate AI and those who are hesitant? The answer lies in emerging design philosophies that keep ethical and moral implications in mind while tailoring technology to what people actually need.

Questions Inevitably Remain

Though we have a moral imperative to adopt AI, the debate is not over. The benefits are clear and measurable, but the costs deserve equal weight: bias in AI training data, the environmental impact of large language models, and inevitable changes to the job market are just a few of the issues that must be addressed to take advantage of AI ethically. If we approach adoption with both optimism and caution, however, we can be genuinely excited about the opportunities ahead. 

Patience, consideration, strict frameworks and governance, and education are the keys to safely and responsibly utilizing AI’s enormous potential.


Cory Hymel is a futurist and VP of Research and Innovation at Crowdbotics. Crowdbotics transforms legacy applications through an AI-powered, requirements-driven approach. This expansive approach reduces development time and minimizes the risk of project failure, helping companies build faster and more securely.
