The Ethics of Nature found in AI

For years there has been a conversation about Artificial Intelligence becoming so capable and so universally informed that it could threaten the continuity of humanity. As artificial intelligence moves into production use today, we are uncovering concrete new areas of risk.

I’m Dominick Romano, an engineer with a background in Video Game Development, Digital Marketing, and Enterprise Software Development. I also have experience deploying transactional software for banks and real-time routing of hazardous payloads. I spent the last three years building what I believe the platform should be to harness, control, and power the multi-domain AI of the future while reducing its associated risks: an Operating System for Artificial Intelligence called drainpipe.io, currently in private beta. Through my contributions to AI for Radiology at the World Health Organization and the International Telecommunication Union, I have learned a few things about the ethics of nature found in AI, and that is what I am here to tell you about today.

What is Ethical AI?

To discuss ethics in AI, or “Ethical AI,” we have to dive into the deeper meaning of what Artificial Intelligence is: numbers representing the probabilities of an array of associated outcomes. We algorithmically train those probabilities, much like teaching a dog to sit, patiently and repeatedly rewarding the desired result. See a piece of information, associate it with this other piece of information, do so correctly, and get a reward. As of today, we teach and train AI according to our understanding of it and of our own brains. But humans have faults, and so does the data humans produce, as does the software we produce. It is part of nature. It’s also part of nature that some humans find those faults in software and exploit them.
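To make the dog-training analogy concrete, here is a minimal sketch of reward-driven learning in the spirit of a simple bandit rule; the actions and the reward function are invented for illustration, not taken from any real system:

```python
import random

# Minimal sketch of reward-driven learning: the "dog" raises its
# estimate of an action's value every time that action earns a reward.
# All names here are illustrative assumptions.

actions = ["sit", "bark", "roll"]
value = {a: 0.0 for a in actions}   # learned estimate of reward per action
counts = {a: 0 for a in actions}

def reward(action):
    # The trainer rewards only the desired behavior.
    return 1.0 if action == "sit" else 0.0

for step in range(1000):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(value, key=value.get)
    counts[a] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    value[a] += (reward(a) - value[a]) / counts[a]

print(value)  # "sit" converges toward 1.0; the other actions stay near 0.0
```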

We will examine potential ethical imbalances in AI and how to correct them.

AI Requires Data

Data always has a source, even if we haven’t discovered it yet. If we look at the population of citizens in the United States (thanks to Wolfram Alpha), we can see disparities in total populations alone. Imagine taking everyday general decision-making data from across the United States, pushing it into an algorithm, and training a machine to make decisions like the US does. Would the resulting machine make decisions as a black man or a white man? If it only knows of decisions, would it know of race? If it knows of race, can it detect race-based decisions?

We have to step out of the emotions of what’s happening in the country and world today, and into a world where information is genetic.

We continually transform the information we share on social media platforms into something closer to a genetic representation of ourselves. States like Texas have filed lawsuits alleging AI data harvesting, claiming that large platforms are illegally harvesting image and audio data to train AI.

The kids’ game of telephone is a story of our brief time here before our stories are retold, for the generations after us, in some undefined variance of their original form. With enough data, some AI algorithms can infer and predict contextual structure quite well. This is an area where Stochastic Gradient Descent optimization excels.
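As a rough sketch of what gradient-based structure inference looks like, the following toy example uses Stochastic Gradient Descent to recover an invented linear relationship from noisy samples; the data and true coefficients are assumptions made up for illustration:

```python
import random

# Minimal sketch of Stochastic Gradient Descent: recover the hidden
# structure y = 3x + 1 from noisy samples, one example at a time.

random.seed(0)
data = [(x, 3.0 * x + 1.0 + random.gauss(0, 0.1))
        for x in [random.uniform(-1, 1) for _ in range(1000)]]

w, b, lr = 0.0, 0.0, 0.05  # parameters start uninformed; lr is the step size

for epoch in range(5):
    random.shuffle(data)   # "stochastic": visit samples in random order
    for x, y in data:
        # Gradient of the squared error for this single sample.
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=3, b=1
```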

So what if we took the example we are working on with the population of the United States and balanced the data input? How? This would require generating decisions, or developing artificial ethics. Once we get here, we get into some hairy territory: over-representation, generation, and what counts as ethical. In our example, let’s say it’s possible to take a single person’s day’s worth of decision-making and simulate the decisions they would make. But the result wouldn’t have much variance and would be subject to low confidence on day two. With that in mind, let’s talk for a second about what Black Swans, the power of math, physics, and Monte Carlo simulations all have in common with this problem.

The Black Swan

A Black Swan is a term referring to the occurrence and impact of the highly improbable; the 2020 COVID-19 pandemic caused what economics refers to as a Black Swan event. In a sense, an undertrained AI is no different from a human on Earth between 2019 and 2020. Going back to our example: if we take one person’s day’s worth of decision-making and try to predict their decisions on day two, we won’t have confidence, due to the lack and imbalance of data. The limited associated historical probability data makes everything improbable. This is where math, physics, and Monte Carlo come in.

[Figure: 250 Monte Carlo simulations of stock prices across simulated days. Source: https://seekingalpha.com/article/4121136-random-walk-model-suggests-investment-in-eli-lilly-isnt-random]

Behold, Monte Carlo. The figure above shows 250 Monte Carlo simulations of a stock price across simulated days. Simulations like this can create data that, when reverse-tested, can produce novel findings in science once the data is scaled to massive amounts and compressed.
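As a rough illustration of how such price paths are generated, here is a minimal sketch assuming a geometric Brownian motion model; the drift, volatility, and horizon parameters are invented for illustration:

```python
import math
import random

# Minimal sketch: 250 Monte Carlo stock-price paths under geometric
# Brownian motion. All parameters are illustrative assumptions.

S0, mu, sigma = 100.0, 0.05, 0.2   # start price, annual drift, volatility
days, dt = 252, 1 / 252            # one trading year in daily steps

def simulate_path():
    price = S0
    path = [price]
    for _ in range(days):
        # Each step draws fresh random noise, so every path is a new scenario.
        shock = random.gauss(0, 1)
        price *= math.exp((mu - 0.5 * sigma**2) * dt
                          + sigma * math.sqrt(dt) * shock)
        path.append(price)
    return path

paths = [simulate_path() for _ in range(250)]
finals = [p[-1] for p in paths]
print(f"mean simulated final price: {sum(finals) / len(finals):.2f}")
```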

So, what if we used AI, or simple timed checks, to evaluate for disparities in populations, then used Monte Carlo simulations to balance the population’s information output through simulation? The result might be a dataset that could essentially train away the imbalance through variance and confidence. Cognitive bias or prejudice, at its very core, is the result of a lack of information, or an imbalanced reference set.
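As a hedged sketch of how that balancing step might look, the toy example below measures group representation against a reference distribution and oversamples the underrepresented group with jittered Monte Carlo resampling; the groups, target shares, and jitter scale are all illustrative assumptions:

```python
import random
from collections import Counter

# Sketch: detect a skewed dataset, then resample the underrepresented
# group toward a reference distribution. All data here is invented.

random.seed(0)
reference = {"A": 0.5, "B": 0.5}  # target population shares (assumed)
dataset = ([("A", random.random()) for _ in range(900)]
           + [("B", random.random()) for _ in range(100)])  # skewed 9:1

print("before:", Counter(g for g, _ in dataset))

balanced = list(dataset)
total = len(dataset)
for group, share in reference.items():
    pool = [x for x in dataset if x[0] == group]
    while sum(1 for g, _ in balanced if g == group) < int(share * total):
        g, value = random.choice(pool)
        # Jitter the copied sample so duplicates add variance, not echoes.
        balanced.append((g, value + random.gauss(0, 0.01)))

print("after: ", Counter(g for g, _ in balanced))  # B rises toward parity
```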

How do humans, nature, and AI intersect?

Researchers are conducting important studies to explore how AI can be used to save lives. AI systems can process medical imaging and identify trends across medical records, and doing so has uncovered underlying diseases, which helps advance that goal.

But if medical data is sampled in Cameroon and trained into a model, the resulting model is useless on patients in Silicon Valley.

Why? The ecology of viruses and diseases differs between Cameroon and Silicon Valley. The underlying intrinsics embedded in the data produce false positives and false negatives accordingly.
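A toy example can make this failure mode concrete. The sketch below, with an invented biomarker and hand-picked regional baselines, fits a single-threshold classifier on one region’s data and evaluates it on another’s; everything here is an assumption for illustration:

```python
import random

# Toy illustration of domain shift: a classifier fit on one disease
# ecology degrades on another. The marker and thresholds are invented.

random.seed(0)

def sample(region_mean, n=1000):
    # The label depends on a marker whose baseline differs by region.
    data = []
    for _ in range(n):
        marker = random.gauss(region_mean, 1.0)
        label = 1 if marker > region_mean + 0.5 else 0
        data.append((marker, label))
    return data

train = sample(region_mean=0.0)   # "Cameroon" baseline (illustrative)
test = sample(region_mean=2.0)    # "Silicon Valley" baseline (illustrative)

# Simplest possible model: one decision threshold, tuned to the
# training region's baseline.
threshold = 0.5

def accuracy(data):
    return sum((m > threshold) == bool(y) for m, y in data) / len(data)

print(f"in-domain accuracy:      {accuracy(train):.2f}")  # near 1.00
print(f"shifted-domain accuracy: {accuracy(test):.2f}")   # collapses
```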

The human mind in nature is also very much susceptible to cognitive biases we can be unaware of, just like an AI model. Ongoing research using Deep Learning methods has shown that model predictions can depend on a patient’s appearance and race. These dependencies show that the cognitive biases of medical practitioners in the supply chain of sample collection and medical imaging can dramatically impact the accuracy of predictive analytics models along prejudicial lines. Slight differences in handling cause this intrinsically; even the slightest variance, when analyzed across large sets with Deep Learning methods, uncovers trends. The practitioner can be unaware it’s happening, but it is, because we exhibit the same biases we train into AI/ML models through the data we generate and collect.
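One practical countermeasure, sketched below with invented records and predictions, is a subgroup audit: compare a model’s error rates across patient groups, where a large gap is a red flag worth investigating. This is a simplified stand-in for the deep-learning analyses described above, not a reproduction of them:

```python
from collections import defaultdict

# Sketch of a subgroup audit over (group, true_label, predicted_label)
# triples. The records below are hypothetical, for illustration only.

predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

stats = defaultdict(lambda: {"correct": 0, "total": 0})
for group, truth, pred in predictions:
    stats[group]["total"] += 1
    stats[group]["correct"] += (truth == pred)

for group, s in stats.items():
    # A large accuracy gap between groups suggests embedded bias.
    print(f"{group}: accuracy {s['correct'] / s['total']:.2f}")
```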

An organization or company of a certain size should be expected to carry the torch as a beacon for innovation. The greater good should be a requirement of all American business institutions.

We need an infrastructure where innovation can thrive and where there are checks and balances on the storage and usage of Personally Identifiable Information, especially as this data comes increasingly close to synthetic genetic data via reconstructed avatars and pattern-of-life analysis.
