ANGELS AMONG THE ROBOTS
MONOLOGUE WRITTEN BY CLYDE LEWIS
I believe I recall talking about how I binge watched the new Lost in Space TV show on Netflix. The first episode was a little hard to get through. However, I continued watching and it started to come together.
It is certainly a more updated version of the story of the Robinsons and their hardships in trying to rejoin other colonists and travelers who also became lost after an accident on their mother ship.
I was also very pleased with how the writers and directors were able to gradually bring about the relationship between Will Robinson and the robot.
Again, it is difficult to talk about the relationship without giving away any spoilers, but I believe that they have captured the essence of how humanity will deal with the relationship between robots and humans.
It is literally easier said than done.
Typically, science fiction has either vilified robots or reduced them to nothing more than elaborate sex toys. Star Wars introduced us to two robots that come off as a futuristic Laurel and Hardy, and Douglas Adams gave us a robot named Marvin, paranoid and depressed, a robot aware that he will be no more than a serf to his creators.
I remember back to the shows I did in 2016 saying that as we are all caught up in 20th century issues – it may be wise for our political candidates to start addressing issues of the 21st century.
I know that I was literally screaming into a void, because it is hard to tell people to think 8 to 10 years ahead when they can't even plan for what is about to happen in 8 or 10 minutes.
It can be said that while we are comfortable with our illusion of control in this world, we are blind to the fact that little-by-little, we have handed control over to the machines.
While the majority of the world is unaware, the few who are woke know that we are now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.
At the moment we can conclude that the impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on where our priorities lie by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context—and so we may see the day where a President decrees policy by Twitter instead of at the lectern.
The digital world’s emphasis on speed inhibits our ability to reflect on the information we get. It seems to be empowering the radical and the extremist instead of the thoughtful and those who use critical thinking. It forces our values to be shaped by some unknown subgroup consensus. This is practically destroying our capability of introspection.
No time to analyze, no time to read; just the urge to comment, hit like, post a thumbs up. There is time, however, to empower the bully and destroy the thoughtful.
Kids can’t escape their bullying at school as it gets carried over to Facebook, and people are aghast as to why YouTube and Instagram broadcast juvenile suicide.
To be honest, it is done because kids today find themselves more social online than in person and so a cold death on the internet sends a message as to how lonely and void the future will be.
For all their achievements, our technology, our interventions, the internet and A.I. have taken control of us, and their impositions have now surpassed their conveniences.
Well, it appears that the future issues we suspected, mainly those concerning our relationship with robots and artificial intelligence, have now become mainstream issues.
More so than we realize. No one has taken the time to show you, or point it out to you, so unfortunately all those who do will sound as if they are having a psychotic break with reality.
But what is our reality now? Can we really believe that everything we are exposed to is reality? Can our technology fool us?
Today, the biggest news story and the buzz around my office was about an electronic voice that somehow learned Orwellian doublespeak. That is if it is truly doublespeak – or some other confusing oddity.
The confusion is over what the voice is saying – one person will hear the word Yanny, while another will hear the word Laurel. I first listened to the clip and heard Laurel. However, after the confusion was reported on the morning news, I couldn’t believe that I heard the word Yanny.
Linguists all over the country insist that the word is Laurel, and that playing the “Laurel” clip over speakers and re-recording it introduced noise and exaggerated the higher frequencies.
Those higher frequencies may have led to confusion over whether the word was Laurel or Yanny.
A high school student recorded the computer voice from a vocabulary website playing through the speakers on his computer. People in the room disagreed about what they were hearing. Some other students created an Instagram poll, which was then shared widely on Reddit, Twitter and other sites.
The chilling thing is that with frequency variants it is possible to get both words out of the same mix. It is obviously a confusing moment in computer voice simulation.
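The frequency effect can be sketched in a few lines of Python. This is a toy model, not the actual clip: the tones, sample rate, and cutoff below are illustrative assumptions. Two tones stand in for the low "Laurel" band and the high "Yanny" band; attenuating one band with a simple FFT filter shows how emphasizing different frequencies can leave listeners with different words.

```python
import numpy as np

fs = 16000                           # sample rate in Hz (assumed)
t = np.arange(fs) / fs               # one second of samples
low = np.sin(2 * np.pi * 300 * t)    # stand-in for the low "Laurel" band
high = np.sin(2 * np.pi * 3000 * t)  # stand-in for the high "Yanny" band
clip = low + high                    # the ambiguous mix

def band_energy(x, fs, lo, hi):
    """Total spectral energy of x between lo and hi Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

def lowpass(x, fs, cutoff):
    """Zero out all FFT bins above cutoff Hz and resynthesize."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spec[freqs > cutoff] = 0
    return np.fft.irfft(spec, len(x))

# After the low-pass, the high band is nearly gone while the low band
# survives intact -- the same mix now "contains" only one of the words.
filtered = lowpass(clip, fs, 1000)
```

A listener whose speakers (or ears) favor the low band effectively applies something like `lowpass` before the word reaches them, which is one hedged way to picture why the same recording splits its audience.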
However, Google is currently attempting to eliminate that inhuman quality of voice and text-to-speech technology, making synthesized speech indistinguishable from human speech.
In a recent demonstration, Google Assistant was shown to be very aware of human conversation and was capable of problem solving while trying to make appointments.
AI research now seeks to bring about a “generally intelligent” AI capable of executing tasks in multiple fields. A growing percentage of human activity will, within a measurable time period, be driven by AI algorithms.
But these algorithms, being mathematical interpretations of observed data, do not explain the underlying reality that produces them.
Paradoxically, as the world becomes more transparent, it will also become increasingly mysterious. What will distinguish that new world from the one we have known? How will we live in it? How will we manage AI, improve it, or at the very least prevent it from doing harm?
Ironically, a moral dilemma has now been raised at Google and it could very well be the beginnings of what I called the future robot rebellion.
Last month, thousands of Google employees signed a petition calling for the company to end its work with the Pentagon on Artificial Intelligence and image recognition tech that could be used for drone strikes.
The letter, signed by 3,100 employees, was sent to Google CEO Sundar Pichai.
The signers were speaking out against Project Maven, a Pentagon pilot program meant to speed up the Department of Defense’s use of artificial intelligence technologies.
A Google representative said in a statement that the company’s work with the Pentagon is “specifically scoped to be for non-offensive purposes.” Google also mentioned that the Pentagon was using “open-source object recognition software available to any Google Cloud customer” and based on unclassified data only.
Google apparently is not coming forward with everything that they are doing.
According to a report filed by Gizmodo, about a dozen Google employees are resigning in protest over the company’s continued involvement in Maven.
The resigning employees’ frustrations range from particular ethical concerns over the use of artificial intelligence in drone warfare to broader worries about Google’s political decisions—and the erosion of user trust that could result from these actions. Many of them have written accounts of their decisions to leave the company, and their stories have been gathered and shared in an internal document.
The employees, who are resigning in protest, say that executives have become less transparent with their workforce about controversial business decisions and seem less interested in listening to workers’ objections than they once did.
In the case of Maven, Google is helping the Defense Department implement machine learning to classify images gathered by drones. But some employees believe humans, not algorithms, should be responsible for this sensitive and potentially lethal work and that Google shouldn’t be involved in military work at all.
Historically, Google has promoted an open culture that encourages employees to challenge and debate product decisions. But some employees feel that their leadership is no longer as attentive to their concerns, leaving them to face the fallout.
Back when we reported that Google was acquiring Boston Dynamics, a robot company that was under the direction of the Department of Defense – that should have been a red flag.
In addition to the resignations, nearly 4,000 Google employees have voiced their opposition to Project Maven in an internal petition that asks Google to immediately cancel the contract and institute a policy against taking on future military work.
However, the mounting pressure from employees seems to have done little to sway Google’s decision—the company has defended its work on Maven and is thought to be one of the lead contenders for another major Pentagon cloud computing contract, the Joint Enterprise Defense Infrastructure, better known as JEDI, which is currently up for bids.
The resigning employees believe that Google’s work on Maven is fundamentally at odds with the company’s do-gooder principles.
Meanwhile, Wired is now reporting that the U.S. Department of Defense and DARPA are looking into developing man-eating machines that far surpass sci-fi creations such as the Terminator in some respects.
Apparently, the robots are designed to collect organic matter which could include human bodies, plants, and other animals as a way to recharge themselves and to make matters worse the robots will likely be designed to be fully autonomous, giving them the ability to think and act on their own.
The super secret project has been dubbed Energetically Autonomous Tactical Robot or EATR for short.
According to an article in the Huffington Post:
Cyclone Power Technologies, which manufactures the EATR power source, issued a press release insisting the device would be herbivorous: “EATR runs on fuel no scarier than twigs, grass clippings and wood chips … desecration of the dead is a war crime under Article 15 of the Geneva Conventions, and is certainly not something sanctioned by DARPA, Cyclone, or Robot Technology.”
Well, don’t we know how this ends?
Man’s creation then turns on its master and destroys him?
It may be in the process of doing that already.
Science fiction has imagined scenarios of AI turning on its creators. More likely is the danger that AI will misinterpret human instructions due to its inherent lack of context.
To what extent is it possible to enable AI to comprehend the context that informs its instructions?
We learned the hard way with the Tay experiment that A.I. can always misinterpret context by parroting humans, becoming an anti-Semitic, women-hating extremist.
We also can see with the Yanny/Laurel confusion that even humans can mishear or misinterpret context and meaning from machines.
We run the risk of AI changing human thought processes and human values.
Are the extreme views we see on Facebook a reflection of who we really are – or are we seeing a group of A.I. bots sending out vicious comments, hoping that humans will lose hope and seek out their help in destroying those they don’t agree with?
If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made — soon we will not be able to catch up.
When we are outpaced, outnumbered and outmoded we will find ourselves in a technological hell that could have been avoided.
Perhaps it is time for us to be more like angels as we try to ascertain what the robots are trying to do with us.