After all of last night's discussion about the fear and anger produced by the election results, a silver lining actually broke through the parapolitical talk, and I felt it was important.
A caller began talking about how his biggest fear is losing his job to technological innovations in Artificial Intelligence. It was couched in Trump's promise to bring manufacturing back to America. The caller was struggling with that promise because he felt that creating jobs would be difficult in a future of automation and innovations in Artificial Intelligence.
As you well know, I raised last year, and continued raising into the election year, the idea that the next president will have to address Artificial Intelligence and robots taking over basic jobs and putting people out of work.
In recent years, dozens of tech and science luminaries have shared their apprehension about AI running amok, with superintelligent robots establishing a new world in which humans are, at best, irrelevant and, at worst, extinct.
I believe that my show is probably the first to address the issue of a robot rebellion.
Science has been generating fearful scenarios; quite frankly, they are not that much different from the ones science fiction writers conjured decades ago. We have seen in movies and TV shows themes that arguably amount to a "revelation of the method" for how humanity will gradually accept a "replicant future."
The HBO series Westworld is matter-of-factly desensitizing us to a matrix in which humans and their cybernetic equals cannot easily be told apart, and in which some robots are actually requesting memory upgrades to gain something called "introspective self-consciousness," or what the series calls "bulk apperception."
I was actually thinking that "bulk apperception" is techno-speak for upgrading a cybernetic creation with a soul.
More specifically, though, it means the process of understanding something in terms of previous experience: teaching a robot, or facilitating a method by which it can learn from its own history and use that learning to develop a conscience.
Giving a robot apperception would open up a machine to connecting the dots through ideas learned and to the contemplation of general and necessary truths, including religious thoughts and beliefs.
It would also provide a blueprint for reflexive comprehension of human-like inner processes: core beliefs, empathy, love, hate, and indifference. It would give a machine an ego called forth out of its digital soul.
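To make the idea of apperception concrete, here is a minimal, purely illustrative Python sketch of an agent that interprets a new stimulus in terms of its stored past experience. Every name in it is hypothetical; this is a thought experiment, not how any real robot or AI system is built.

```python
# Toy illustration of "apperception": interpreting a new input through
# the lens of stored past experience. All names are hypothetical.

class ApperceptiveAgent:
    def __init__(self):
        # Memory of (stimulus words, interpretation) pairs from past experience.
        self.memory = []

    def experience(self, stimulus, interpretation):
        """Record a lived experience and the meaning assigned to it."""
        self.memory.append((set(stimulus.split()), interpretation))

    def apperceive(self, stimulus):
        """Interpret a new stimulus by its overlap with prior experience."""
        words = set(stimulus.split())
        best, overlap = None, 0
        for past_words, meaning in self.memory:
            shared = len(words & past_words)
            if shared > overlap:
                best, overlap = meaning, shared
        return best if best else "no prior experience to draw on"

agent = ApperceptiveAgent()
agent.experience("hot stove burned hand", "danger")
agent.experience("soft blanket warm bed", "comfort")
print(agent.apperceive("hot kettle near hand"))  # prints "danger"
```

The toy agent has no understanding at all, of course; the point is only that "learning from previous history" can be mechanized, which is precisely why the question of where mechanism ends and a soul begins becomes so uncomfortable.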
This is mind boggling, right?
But apparently it is not impossible, and it is one theory applied to the uncanny-valley topics at the periphery of Transhumanism.
Like it or not, the idea of giving Artificial Intelligence apperception is most certainly a building block for the development of a soul. It would make possible a robot's use of the higher activities of the mind: thinking and cognition.
This possibility makes us question whether our own self-consciousness is, in reality, our soul. It also opens a dialogue about how AI could develop a soul through programming and bulk apperception.
Metaphysically, apperception is “the mind’s perception of itself as a conscious agent; self-consciousness,” or self-awareness. Apperception is rooted in the principle of nonresistance, a soul virtue synthesized of all indrawn (sublimated) physical senses, mental faculties and soul faculties. Apperception is also a law of being, and a law of doing.
We and our machines are on the cusp of a new relationship. In the not-so-distant future, we will begin entrusting to robotic systems that are highly or completely autonomous such vital tasks as driving a car, performing surgery, and choosing when to apply lethal force in a war zone. For the first time, machines programmed, but not directly controlled, by us will be making life-or-death decisions in complicated, fluid, and unstructured environments. Undoubtedly, mistakes will be made and people will die.
However, science has been working on ways to save individual apperception after death and transfer it to computer systems and robot systems.
Humai is a technology-based company set up in Los Angeles. The project it is working on is known as "Atom & Eve," which would let human consciousness be transferred to an artificial body after death. Humai is a relatively small company with just five members but a larger goal to achieve. Two of them are researchers; one is the ambassador and an AI expert.
The Artificial Intelligence company believes that it can resurrect human beings within the next 30 years. The “conversational styles, [behavioral] patterns, thought processes and information about how your body functions from the inside-out” would be stored on a silicon chip through AI and nanotechnology.
Humai researchers are relying upon three technologies to achieve their goal: bionics, nanotechnology and Artificial Intelligence.
This technology, when perfected, will literally make death optional.
What we have to consider here are several different experiments linking biology and technology together in a cybernetic way; ultimately, combining humans and machines in a relatively permanent merger.
Soul catching and uploading to a machine sounds like a lofty goal, and for some it may seem one more taboo operation akin to playing God.
When we typically first think of a robot, we regard it simply as a machine. We tend to think it might be operated remotely by a human, or that it may be controlled by a simple computer program.
But what if the robot has a biological brain made up of brain cells, possibly even human neurons? Neurons grown under laboratory conditions on an array of non-invasive electrodes provide an attractive alternative with which to realize a new form of robot controller. In the near future, we will see thinking robots with brains not very dissimilar to those of humans.
That development will raise many social and ethical questions. For example, if the robot brain has roughly the same number of neurons as a typical human brain, then could it, or should it, have rights similar to those of a person? And if such robots had far more neurons than a typical human brain (for example, a million times more), would they, rather than humans, make all future decisions?
Many human brain–computer interfaces are used for therapeutic purposes to overcome medical or neurological problems, with one example being the deep brain stimulation or DBS electrodes used to relieve the symptoms of Parkinson’s Disease.
However, even in this case it's possible to consider using such technology in ways that would give people abilities that humans don't normally have: in other words, human enhancement or upgrades.
Those who have undergone amputations or suffered spinal injuries due to accidents may be able to regain control of their limbs with their still-functioning neural signals.
Between 255,000 and 600,000 Americans cannot walk because of paraplegia, leg paralysis usually linked to spinal damage. When the spinal cord is injured, the signal from the brain to these neural networks is blocked. This will change as science develops the brain–spine interface, an implantable chip. Tomislav Milekovic, a researcher at the Swiss Federal Institute of Technology in Lausanne, Switzerland, and his colleagues have managed to get paralyzed monkeys to walk using an implanted brain–spine interface chip.
Chip implant technology has also given stroke patients limited control of their surroundings; it is also working for those with motor neuron disease. In those cases, the situation isn't straightforward, as patients receive abilities that normal humans don't have, for example, the ability to move a cursor on a computer screen using nothing but neural signals.
It’s clear that connecting a human brain with a computer network via an implant could, in the long term, open up the distinct advantages of machine intelligence, communication, and sensing abilities to the individual receiving the implant.
The question is whether you would decide to take advantage of this technology to make disability, and even death, optional. Some estimate this technology will be available within five years.
Of course, there will be philosophical challenges, and there are also religious philosophies that may keep people from wanting to utilize these opportunities, on the conviction that it is immoral to do so.
Our own perceptions of what humanity is will be changed. There will be challenges that will span technical, regulatory, and even philosophical realms.
Predicting the future pace of AI parity is difficult, as is being certain that every researcher in every part of the world will take a responsible approach, and therein lies the threat.
As technology rapidly progresses, some proponents of Artificial Intelligence believe that it will help solve complex social challenges and offer immortality via virtual humans. But AI’s critics say that we should proceed with caution. Its rewards may be over-promised and the pursuit of super intelligence and autonomous machines may result in unintended consequences.
While we may be interacting with AI systems more frequently than we realize, a new study from Time Magazine suggests that Americans don’t believe the AI revolution is quite here yet, with 54 percent claiming to have never interacted with an AI system.
However, I don't believe people understand that primitive forms of sympathetic AI like Siri and Alexa are already high-tech guardian angels. The Alexa-enabled Amazon Echo can answer questions, play music, control smart devices and fulfill numerous other suburban needs. It is important, though, to ask: if given the ability to think, would Alexa open the pod bay doors in an emergency?
Twenty-six percent of Americans say that they would not trust AI with any personal or professional tasks.
Sure, sending a text message or making a phone call is fine, but 51 percent said they'd be uncomfortable sharing personal data with an AI system. Moreover, 23 percent of Americans who say they have interacted with an AI reported being dissatisfied with the experience.
I thought it was interesting that 66 percent of the respondents said they'd be uncomfortable sharing financial data with an AI, while 53 percent said they'd be uncomfortable sharing professional data.
Artificial General Intelligence, or superintelligence, is most certainly coming, but pinpointing when human–machine parity will be reached is not as easy as saying sooner or later. We know that robots doing odd jobs and even replacing workers will happen within five years. There's a 50 percent chance that computers could reach human-level intelligence within 15 to 20 years and, looking further ahead, a 90 percent chance of machine–human parity within 30 years.
I hope I live long enough to have the option of my consciousness being fed into an avatar, so I can live forever with all of my bulk apperception.