This Geek I Know

tech news YOU can use

The Growing Impact of Artificial Intelligence: Emerging AI Applications and Ethical Considerations, Part 3 (conclusion)

This is the last piece in our series on the impact of AI. The previous articles in the series can be found at:

Part 1

Part 2

Today we’re looking at the ethical considerations of using AI applications. Every new technology produces unintended consequences alongside the benefits everyone hopes for. We also have to accept that some elements of any new tech will appeal to some people and not to others. What I present here isn’t a condemnation or an approval – it’s just a list of things we need to be aware of when we’re implementing an AI application.

BIAS AND FAIRNESS ISSUES

Algorithms don’t think. They don’t reason. They take in information, process it, and spit out what they calculate we want. Most of today’s generative AI is built on large language models (LLMs), and that’s all they do – they process language. They build their base of things they “know” from the things they receive. Because of that, we may see discrimination in different forms creep into the output, and most of it will be unintentional. As the algorithm processes its inputs, it can only draw from what it has. If its information contains societal biases, the algorithm will treat those biases as its norm. Additionally, if the data sources from which it “learns” lack diversity, the outcomes will be skewed in the direction of that monoculture. Not only will this create inaccurate results, but some marginalized groups will find that the results are not relevant to them. Furthermore, during the design and development stages, it’s possible for unconscious (or conscious) biases to affect the decisions that the AI makes.

Developers of AI systems can minimize the possibility of unintentional discrimination by incorporating diverse datasets that represent a wide range of demographics. They can also engage interdisciplinary teams – including ethicists and social scientists – in the development process to provide more varied perspectives. As part of ongoing support for AI models, though, teams must implement regular audits and assessments that can identify bias and initiate the necessary mitigations.
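To make the “regular audits” idea concrete, here’s a minimal sketch of one disparity check an audit might include: comparing how often a model produces a favorable outcome for each demographic group. The group labels, sample outputs, and the 0.8 threshold are hypothetical, purely for illustration – a real audit would use many metrics, not just this one.

```python
# Minimal bias-audit sketch: compare per-group favorable-outcome rates.
# All data below is made up for illustration (1 = favorable outcome).

def selection_rates(predictions, groups):
    """Return the favorable-outcome rate for each demographic group."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if pred else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group's rate to the highest group's rate.
    Values well below 1.0 suggest the model favors one group."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs and group labels:
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
grps = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, grps)
print(rates)                          # per-group rates
print(disparate_impact_ratio(rates))  # ratios below ~0.8 often get flagged
```

Running a check like this on every model release is what turns “audit for bias” from a slogan into a routine, measurable task.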

DATA PRIVACY AND SECURITY CONCERNS

As with any technology, organizations must guard against the misuse of users’ personal data. That includes preventing unauthorized access to or sharing of users’ data, which can lead to identity theft and too many other privacy violations to list here. Users need to be aware that some companies may exploit personal data for profit without ever asking for consent. We also still often see a lack of transparency about what data is collected and how it is used, stored, protected, and disposed of.

Organizations should put robust encryption in place so that data is protected both in transit and at rest. They also need to keep up with the standard security maintenance of running system updates and installing security patches. In accordance with security best practices, organizations must also establish and maintain strict access controls and authentication processes so that only authorized personnel have access to sensitive information.
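The “strict access controls” point is easy to picture as code. Here’s a minimal sketch of role-based access control with a deny-by-default policy; the roles, permission names, and policy table are hypothetical examples, not any particular product’s API.

```python
# Minimal role-based access control sketch (deny by default).
# Roles and permission names below are made-up examples.

ROLE_PERMISSIONS = {
    "analyst":  {"reports:read"},
    "engineer": {"reports:read", "models:read"},
    "admin":    {"reports:read", "models:read", "pii:read"},
}

def is_authorized(role, permission):
    """Unknown roles and unlisted permissions are refused."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("analyst", "pii:read"))  # only admins may touch PII
print(is_authorized("admin", "pii:read"))
```

The key design choice is that anything not explicitly granted is denied – the posture that keeps sensitive data out of reach when someone forgets to write a rule.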

IMPACT ON EMPLOYMENT

Generally, technology that replaces manual tasks does initially displace human workers and introduce a measure of economic inequality, even though these technologies almost always enable more jobs than they displace. Even so, our ethical considerations really need to include the fair transition and reskilling of those workers who will be affected, so that we can help them adjust and not fall behind in the evolving employment market. As organizations move toward more automated tasks, they should be transparent, and they should involve stakeholder input, so that any efficiency gains are balanced with the social consequences. It’s not just the individual workers, after all, who will be adversely affected when their jobs are eliminated, but entire communities.

When an organization undertakes to deploy AI, it should also invest seriously in continuous education and training programs to help its employees adapt to the new environment. It would also help to foster collaboration among governments, businesses, and educational institutions, so that curricula and certification programs can be created that align with future job market requirements. We really haven’t seen this done well since – well, ever. Ethical deployment of AI systems includes creating policies that support career transitions and offer financial assistance – or better yet, incentives – for workers who seek to reskill or upskill.

ENVIRONMENTAL IMPACT OF AI

We really need to take care of the planet we live on, though I don’t believe we should sacrifice all technological advances in the name of environmentalism. The reality is that when we introduce something new, we have to watch what happens as a result. We already know that AI systems, especially those that live in large-scale datacenters, suck up enormous amounts of energy, and that energy production contributes to carbon emissions and environmental degradation. It’s not unreasonable to expect that as the technology develops, we’ll see more energy-efficient algorithms as well as greater use of renewable energy sources. That’s not the end of it, though, because we also have to consider the lifecycle of the hardware all this runs on – from production to disposal. We need to look at reusable and recyclable components so that we can reduce the e-waste generated by AI hardware.

THERE ARE ALWAYS UNINTENDED CONSEQUENCES

As excited as I am about the great things AI can help us accomplish, I’m also very much aware that humans are being replaced by systems that just don’t do as good a job, and that increased datacenter processing creates more heat that has to go somewhere and consumes more energy that has to come from somewhere. Some effects will sort themselves out. If you’re an artist, for example, your work will always be just plain better than anything an AI system can create. But excellence isn’t always required, and that’s where job replacement will occur: when “good enough” is good enough, AI is perfect. Where excellence is what’s needed, though, only a human can provide it. For those who are conditioned to accept mediocrity and call it excellence, I don’t have an answer. We’re already seeing some stores replacing automated checkouts with real cashiers – they’ve discovered that automation for the sole sake of saving money isn’t always the best solution. I tried letting AI do some of my writing for me, and I was not at all satisfied with the results. The poor quality completely erased the time savings, because I had to rewrite the whole thing. It does a great job of outlining, though.

I’d like to hear from you – what’s your take on AI’s prospects? Do the benefits outweigh the downsides, or vice versa? Something I plan to explore later is the idea of regulating AI, but that’s going to take a lot of research. Your turn – drop a comment below.