What does AI technology say about humanity?
Our Self-reflection is ugly, but our AI concerns have heart.
I’ve always believed our stories are a reflection of us.
This is probably why I’m convinced that our science fiction stories have so much to say about who we are as humans.
This is also likely why I find it interesting that quite a few science fiction films share a similar story.
That is, they often show the ways in which humans confront their own reflection through AI (or at least advanced) technology.

I’m thinking, for example, of Blade Runner’s Replicants and the David character from A.I. Artificial Intelligence. In these and many other cases, AI beings are depicted as blurring the boundary between what is human and what is technological.
And when this boundary is blurred in literature and in film, humans tend to feel threatened by technology.
Blade Runners hunt Replicants, and a young boy feels threatened by the AI robot that was once meant to take his place.
In this way, these films represent a very real condition we are now facing: the perceived ‘threat’ of AI tech.
Indeed, our defense mechanisms are triggered, and we react with an impulse to either control or annihilate the technologies we feel we are in competition with.
And while the above is a basic summary of what I’ve been writing about so far in Optimism in AI, I want to talk about the ways in which these reactions to AI technology - even those rooted in fear - tell a more positive story about the future of AI than our literature usually paints.
In other words, when we consider that AI technologies are a reflection or extension of humanity…
…we can read our concerns about AI technology as reflecting a distinctly human character—one that is highly concerned with ethics, rights, and equality.
I like the way I put it in another article:
“Rarely do we see humanity, it seems, with as much clarity as we do in our forecasts about AI technology, as well as in relation to the non-human.”
As we continue to fear AI technology - precisely because it is, by its nature, becoming more like us - it is possible that we will develop a greater self-awareness of the human condition, applying to ourselves the same concerns we have for AI technologies.
In our efforts to govern the ethics of AI, are we not also governing our own engagement with rights, equality, and ethics?
It may be that we are actually better able to view the worst parts of our anthropocentric condition by projecting them onto a ‘third party’, i.e. an AI operating system.
In seeing our ugliest Selves reflected back at us, I believe we can more adeptly address what it is that keeps us hurtling towards fears of annihilation in the first place.
Here’s just one example of what I mean:
Discrimination Bias in AI Technology
Not too long ago, Margrethe Vestager, the EU’s competition commissioner, made a comment about AI that reached a few headlines.
She said something along the lines of:
The risk of discrimination far outweighs the likelihood of human extinction when it comes to the future of AI technology.
What I believe this statement uncovers are the ways in which AI algorithms are already being impacted (and sometimes limited) due to their reliance on naturally biased human data.
Given the discrimination in AI that’s already happening, the risk of human extinction at the hands of AI is a distant prospect by comparison.
And while this is definitely NOT the first ever conversation humans have had about discrimination…
…it is worthwhile to note that it is our collective (and almost universal) fear of AI technology that seems to be pushing these discussions and developments beyond their present scope.
As always, it is when those in power are finally exposed to the injustices of discrimination that they want to actually do something about it.
The only difference in this case is that the ‘power holder’ in this oppositional scenario includes all of humanity.
And while there is a deeply cynical take to be had here pertaining to how frustrating it is to hear rallying cries against discrimination go unheard, only for the threat of AI to make people ‘finally get it’...
…I nevertheless want to focus on what I was saying earlier: how this reaction to discrimination (i.e. the desire to correct for it during an algorithm’s learning process) can be seen in a positive light, in that it will also - and reciprocally - shape the ways we humans ‘correct’ for discrimination when technology isn’t necessarily present.
I mean just look at this sample of concerns humans have relating to discrimination in technological applications alone:
Avoiding Bias in Algorithmic Training Data
Mitigating Bias in Facial Recognition Technology
Ensuring Fairness in AI System Output (so algorithms do not disproportionately disadvantage certain groups)
Improving Transparency in AI Decision-Making
Prioritizing Human Values in AI Decision-Making
Maintaining Datasets that Represent Diverse Populations
Holding Algorithms Accountable through Established Mechanisms
Balancing Innovation with Accountability
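One of these concerns - ensuring fairness in AI system output - can be made concrete with a toy audit. The sketch below uses entirely hypothetical data and a made-up `positive_rate` helper to measure a simple “demographic parity gap”: the difference in how often a model grants a positive outcome to two groups. It is a minimal illustration of the kind of check fairness researchers run, not a real auditing tool.

```python
# A minimal sketch of a "demographic parity" audit: does a model's
# positive-outcome rate differ across demographic groups?
# All data here is hypothetical.

def positive_rate(predictions, groups, target_group):
    """Share of positive predictions (1s) given to members of target_group."""
    selected = [p for p, g in zip(predictions, groups) if g == target_group]
    return sum(selected) / len(selected)

# Hypothetical loan-approval predictions (1 = approved) and group labels.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

rate_a = positive_rate(predictions, groups, "A")  # 3/4 = 0.75
rate_b = positive_rate(predictions, groups, "B")  # 1/4 = 0.25

# A large gap signals the kind of disparate impact the list above worries about.
parity_gap = abs(rate_a - rate_b)  # 0.5
```

A gap of 0.5 here would mean group A is approved twice as often as group B for the same model - exactly the “disproportionate disadvantage” the fairness concern names.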
When you read these concerns back to yourself, what would you say they reflect about our values as humans?
I think they say that our goals as a species, when it comes to governing a technological extension of our Selves, are to:
Avoid and mitigate bias
Ensure fairness
Improve transparency
Prioritize human values
Represent diverse populations
Hold decision-makers accountable
As we attempt to govern a new era of technological progression through ethics discussions (among other Self-reflexive topics), and as we progress toward unification with technology via the technological singularity (which I describe in Hope for the Future of Ethics in AI), it becomes clear that the rules we are laying out now are building the foundation for our own future governance - particularly as we evolve in sync with technology into a posthuman era.
Governing our Selves through AI
There are myriad ways that we humans will (and already are) combining our own Selves with AI technologies.
From things as small as Google Home or Siri, to biotechnological developments that use algorithms and operating systems to sustain life, we are effectively uploading our Selves while our technological developments are downloaded back into our ways of being.
During this reciprocal process of technological evolution (as I describe it in Are Humans Better than Machines?), we will inevitably shift toward applying the legislation we’ve prescribed for AI technologies to our own modes of conduct.
What I mean is: we are creating a new set of ethics (and legislation) today that is cultivated and curated away from bias, and it is likely that we humans, fusing with technology, will eventually become governed by these same ethical foundations.
As we reflect on what our fears of and reactions to technological developments in AI say about humans - and what they may predict for our future from an optimist’s perspective - we can acknowledge a collective human desire for betterment.
Most importantly, this emphasis on bettering our technologies will extend to help humans better themselves, altering our own Selves when confronted with the mirror of the AI systems we create.
From where I’m standing, I’d say we’re looking pretty good.

