1/7/20: 2020 – THE YEAR OF THE METAL RATZ
MONOLOGUE WRITTEN BY CLYDE LEWIS
Longtime listeners of the show know that I have a hobby of looking into the Chinese calendar to find out which sign of the Chinese zodiac is connected to a particular year.
Well, my wife told me that it is the year of the Metal Rat.
According to the 2020 Chinese horoscope, the Lunar New Year starts on Saturday, January 25th, 2020, and ends on February 11th, 2021. The Rat is the first of the 12 animals in the Chinese zodiac cycle.
The Metal Rat is usually a sign of new beginnings – a time to reboot and start over but of course, I have taken some liberties with the term “metal rat” and have decided for my own narrative needs to use it as a metaphor for the robot.
2020, we have been told, is going to be the year where all of the most amazing technology will be implemented for what the technocrats say is our own good.
The Consumer Electronics Show is underway in Las Vegas, and Samsung has wheeled out its first artificial human, called Neon.
Neon tweeted “Have you ever met an ‘Artificial?’” several times since its Twitter account launched in December. Its LinkedIn page says it is “bringing science fiction to reality” and has “the mission to imagine and create a better future for all.”
Little was known about Neon, beyond the fact it’s run by Pranav Mistry, the Samsung research exec who in October was named CEO of Samsung’s Bay Area-based Technology and Advanced Research Labs.
Over the weekend Mistry tweeted out two photos of what appears to be an avatar (or “artificial human?”) that he called “CORE R3.”
“It can now autonomously create new expressions, new movements, new dialog (even in Hindi), completely different from the original captured data,” Mistry tweeted.
Looking at the avatar, you would think you were living in a lesser level of Westworld, but technology is growing exponentially – and robots, drones, military metal rats and advanced artificial intelligence are certainly going to be huge in the next decade.
Some of us are still trying to get a handle on the technology we have now.
The other day I lost my cell phone. I knew exactly where I lost it too. I lost it at a bar. I realized that I lost the phone 10 minutes after I left. When I returned to the bar to ask if anyone had found it, the woman there told me that no one turned it in. I remembered where I had it last and there was only one other guy in the bar with me.
He was still there when I returned.
I asked him about it and he said that he did not see it. He told me that there was a woman in the back playing video poker and that perhaps she saw it.
I walked back and asked about my phone – she said, “I don’t know where your phone is.” She seemed annoyed, and I suspected that she thought I was being accusatory.
She had a big bag where she carried her stuff and I asked her if she would be kind enough to empty her bag. She told me “I am not a thief.” The request was fruitless.
I left feeling sick – all my information, all of my conveniences, gone. I realized the phone was not lost; it was stolen.
I suspected that the woman took it.
I thought, who is stupid enough to steal a cell phone, especially when I can locate it with my Apple ID? The only problem was that the Apple Store was too far away and was closing, so I would have to wait to ping the thing.
Many people are unaware that stealing a cell phone can be charged as a felony that carries up to five years in jail.
Luckily, the woman’s boyfriend found the phone and texted my wife saying that his girlfriend stole the phone and that he wanted to give it back.
And fortunately, a lot of my apps were locked with passwords.
I went a day without my cell phone and I was so worried about whether or not any information would be hacked or taken from me.
There was a sense of cold vulnerability that I felt with it gone and that fact in and of itself was troubling. I talked about it with one of my friends and he said that his phone unlocks with facial recognition and that he loves that capability.
I told him that I liked the fact that with my phone I don’t have to hail a cab, I can order food using an app – I don’t have to carry cash.
But I realized something that most people do not realize – with your cell phone you don’t have to learn. You don’t have to know directions to a location or memorize telephone numbers of the ones you love and associate with.
Technology can be a trap.
The same technology that makes your life convenient – is the same technology that can be either taken or used against you.
For example, the same facial recognition that unlocks your cell phone can be fed into a drone that can target and kill you. Object recognition software can identify a target, and it gets more accurate the more it is used. Every picture you post to Instagram, Facebook and Twitter adds to a virtual portfolio – the more pictures you make public on social media, the more thoroughly these technologies chronicle your features.
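The idea that a larger public photo trail sharpens a match can be illustrated with a toy sketch – this is not any vendor's actual pipeline, and every name and number here is made up for illustration. Recognition systems map each photo to an embedding vector and compare vectors with cosine similarity; averaging the embeddings from many photos of the same person yields a more stable "template" of that face.

```python
import numpy as np

# Toy illustration only: each photo of a person is modeled as the person's
# "true" embedding plus noise (pose, lighting, angle). A template built from
# many photos averages out the noise, so a new sighting matches it better.

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
true_face = rng.normal(size=128)  # the person's hypothetical true embedding

def noisy_photo():
    # One photo = true embedding plus per-photo noise
    return true_face + rng.normal(scale=1.0, size=128)

template_one = noisy_photo()                                          # 1 photo
template_many = np.mean([noisy_photo() for _ in range(50)], axis=0)   # 50 photos

# Score both templates against 20 fresh "sightings" (e.g. camera frames)
probes = [noisy_photo() for _ in range(20)]
avg_one = np.mean([cosine_similarity(template_one, p) for p in probes])
avg_many = np.mean([cosine_similarity(template_many, p) for p in probes])
print(f"average match, 1-photo template:  {avg_one:.3f}")
print(f"average match, 50-photo template: {avg_many:.3f}")
```

Under these assumptions the 50-photo template scores consistently higher against new sightings than the single-photo template – the statistical sense in which every additional public photo makes a face easier to find.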
I am sure that this technology was used to track down and kill Qasem Soleimani. It was also the same technology that the Obama administration used to hunt down and kill Anwar al-Awlaki, an American citizen. His 16-year-old son, Abdulrahman al-Awlaki, also a U.S. citizen, was killed in a U.S. drone strike two weeks later.
That drone – you know, the one you see advertised for great aerial photography – can be made to fly without a pilot, carrying a warhead that can be thrust headfirst into a target.
New technology for 2020 will be certainly an amazing thing to behold but keep in mind that weaponized versions of this technology are being made to kill you. These technologies are now trending because of the latest incident in the Middle East and it appears that both sides will be using highly advanced Artificial Intelligence and robotics that will change the way wars will be waged.
Many of the companies behind this technology got a big boost after the attacks at the Baghdad airport, which have triggered the threat of all-out war.
Now it has been reported that at least six rockets have struck the Al-Asad airbase in western Iraq, which houses US troops, and there are unconfirmed reports of up to 30 missiles.
Iranian news agency FARS reported the launch of multiple missiles in “revenge” for last week’s drone strike that assassinated General Qassem Soleimani at the Baghdad international airport, along with several leaders of the Iraqi Shia militia.
Anonymous Pentagon officials claim the missiles came from Iranian soil, which would be a first since the Iran-Iraq war ended in 1988.
Meanwhile, as I stated on my show last night, it speaks to the state of American politics when for three years the continued defense of Donald Trump’s record has been: “well, he hasn’t started any new wars.”
I can’t say that now and that should be seen as a loud signal that this is a time of strange affairs.
If this escalates it may not end the way we want it to and I have seen an overly confident armchair warrior declaring that Iran will finally be turned into a parking lot.
We were talking about how we used to run a Ground Zero opener taken from a Family Guy episode where Peter Griffin says incredulously, “So you sayin’ we need to invade Iran?”
We played that opener for 6 years.
We actually pulled it from rotation because we wanted to change it for the new year, and now, ironically, we are in open warfare with Iran. While we think this will end quickly, new advancements on both sides are something we should think about. In the past we have often laughed at the thought of a formidable foe in the Middle East – our advancements have made us feel almost invincible – but technology has advanced to the point where even Iran has robotics that can be used to destroy enemy targets and engage in cyber-attacks, and its continued uranium enrichment raises the remote possibility that low-grade nuclear warheads or bombs could be used as retaliatory weapons.
Iran is no longer abiding by many of the restrictions in the landmark 2015 nuclear deal and this should be something to consider as their allies are sharing with them advanced weapons that can boost the capabilities of Iran.
Iran is closer to having a functional nuke now than it has been in the last five years.
Weapons contractors are no respecters of persons—they will sell weapons to anyone and the advanced technologies are freely being shared between Russia, China, and Iran.
Lockheed Martin, famous for its fighter planes, helicopters, and missiles, saw its stock price spike to over $416, a jump of seven percent almost overnight. Other defense corporations like Northrop Grumman saw a nine percent rise in share price, as the stock market rushed to buy a piece of a highly profitable company. General Dynamics and Raytheon saw similar increases in their companies’ value.
Meanwhile, Raytheon provides two distinct programs: JADE I and JADE II.
Operational analysis of “war gaming” and the explanation of the JADE holographic simulations can be found in Raytheon reference memos, which state that the NATO Modeling and Simulation Group Technical Activity 48 (MSG-048) was chartered to evaluate a Command and Control (C2) language, Coalition Battle Management Language, for multinational and NATO C2 collaboration supported by modeling and simulation tools. To achieve this, MSG-048 used an emerging open technical standard based on the US Joint Battle Management Language (JBML) prototype Web services.
The memos describe the Joint Air Defense Training Simulation (JADE) II as an experimental synthetic exercise first performed in late October 2007. The exercise provided training for an Anti-Air Warfare (AAW) organization, including two naval frigate AAW teams, an air surveillance and combat management team at a Control and Reporting Centre (CRC), and two combat aircraft pilots, training air-maritime cooperation and coordination procedures.
The synthetic exercise was enabled by interconnecting stand-alone training simulation systems and their voice and tactical data link systems, creating the JADE II Joint Tactical Training Capability Prototype (JJTTCP). The JADE II experiments evaluated the JJTTCP’s ability to provide relevant and cost-effective training.
Jade Helm 15 is an implementation of Artificial General Intelligence on the battlefield, literally “mastering the human domain” as reported in their cryptic slogan.
“JADE” is an A.I. quantum computing technology that produces holographic battlefield simulations and has the ability to use vast amounts of data collected on the human domain to generate human terrain systems in geographic, population-centric locations – identifying and eliminating targets, insurgents, rebels or whatever can be flagged as a target in a Global Information Grid for Network Centric Warfare environments.
The JADE II battlefield system is cognitive and intuitive. It can examine previously executed battle plans and devise “new and better” strategies to increase the “kill chain.” The second generation of JADE also has the capability for two-way communication with drones through the OCCOM technology, one of the next-generation integrations to this system.
Many people remember the JADE HELM exercise that took place in 2015. It was a cognitive software program based on Network Centric Warfare systems at the HELM.
These systems could very well see an evolutionary change as A.I. grows exponentially for what will be touted as proficient warfare systems. The end result would be a holographic battlefield technology where killer robots and drones would be put into the matrix and used for direct kills.
US military commanders are itching to get their hands on some killer robots after an Army war game saw a human-robot coalition repeatedly rout an all-human company three times its size. The technology used in the computer-simulated clashes doesn’t exist quite yet – the concept was only devised a few months ago – but it’s in the pipeline, and that should concern anyone who prefers peace to war.
Civilian casualties are already a huge problem with drone strikes, which by some estimates kill their intended target only 10 percent of the time. Drones, an early form of killer robot, offer minimal sensory input for the operator, making it difficult to distinguish combatants from non-combatants.
Soldiers controlling infantry-bots from afar will have even less visibility, being stuck to the ground, and their physical distance from the action means shooting first and asking questions later becomes an act no more significant than pulling the trigger in a first-person-shooter video game.
Any US military lives saved by using robot troops will thus be more than compensated for by a spike in civilian casualties on the other side. This will be ignored by the media, as “collateral damage” often is—but many Americans will not be informed about the dangers ahead as many of the technologies we use will be weaponized, used for advanced surveillance and biometric identification.
The Pentagon is in the Death Machine business and many militaries and tech contractors are selling advanced A.I. weapons and what can be called Metal Rats to the highest bidder.
Meanwhile, Iran’s killer robots with the power to take on tanks and infantry have been captured on video, as tensions escalate in the region. The Heidar-1 unmanned ground vehicles (UGV) were recently unveiled by the Islamic Republic of Iran Army Ground Forces. Iranian robotics experts in Tehran developed the deadly robots with the aim of rolling under tanks and blowing them up in a potential future ground conflict.
They were created by the Research and Self-Sufficiency Jihad Organization of the Iranian Army. Scientists created six different versions of the wheeled bots, including two which are mounted with assault rifles.
Another is designed to blow up underneath a tank like a super-powerful mobile mine.
All the vehicles are topped with antennae and small video cameras, while the assault-rifle equipped versions also have telescopic optic sensors to aim their weapons.
Describing them as “network-connected anti-infantry and armor smart UGVs,” Iran first unveiled the robots at an army weapons expo in Tehran last October.
This isn’t the first time uncrewed ground vehicles have been used in the Middle East.
ISIS and other paramilitary terror groups have built their own in the past from commercially-available parts.
The killer robots are similar to the Goliath tracked mines – known also as Doodlebugs – developed by the Germans during World War Two.
Meanwhile, attempts at creating Skynet have been successful on many fronts.
In the first Terminator films, Skynet is described as a revolutionary artificial intelligence system built by Cyberdyne Systems for SAC-NORAD. It was described as an advanced neural network for full-spectrum control, utilizing advanced A.I. that controlled robotic killing machines from both ground and air.
Starlink from SpaceX has launched strings of satellites that have been mistaken for UFOs this past year.
Last night’s successful Starlink launch made SpaceX the world’s largest satellite operator, and it was the first launch to happen under the auspices of Space Force, which was signed into existence on December 20th, 2019 by President Donald Trump as the sixth branch of the armed forces.
Back in October, we learned that SpaceX has been working with the U.S. Air Force on a program called Global Lightning.
The military is seeking ways to use what are called mega-constellations that bring cheap broadband capabilities all over the world.
SpaceX in December 2018 received a $28 million contract to test, over the next three years, different ways in which the military might use Starlink broadband services. So far, SpaceX has demonstrated data throughput of 610 megabits per second in flight to the cockpit of a U.S. military C-12 twin-engine turboprop aircraft.
SpaceX received the largest Air Force contract so far of any of the LEO broadband companies under the so-called “Defense Experimentation Using the Commercial Space Internet” program.
Low-cost internet access from LEO constellations is one of the products that the Air Force wants to be able to acquire and use as soon as possible.
Today the military relies on a mix of geostationary commercial and military satellites. The megaconstellations would have hundreds or thousands of small satellites orbiting the planet at lower altitudes.
It would be an advantage for the Department of Defense to invest in military-owned satellites so that they control these metal rats – namely mechanized death machines from the air.
When it comes to topics like the internet of things the train has already left the station.
Digital sensory devices are now being procured and they are meant to interact with each other in a similar way to which people interact with each other on the web.
With 5G on the way, machines will be able to communicate with machines, robots can communicate with robots – and the military can efficiently communicate with their autonomous metal rat killing machines that put you constantly under the gun.
We have gone from telemetry to machine-to-machine, from analog to digital, and soon we will be making the transition from physical to cyber. Then will come the transition from organic to metal machines.
The invention of metal rat fighting machines and their use will be a keen equalizer for advanced adversaries like Russia and China – and, to a lesser degree, Iran.
The US has not picked a fight with a militarily-equal enemy in over half a century, but robots capable of increasing the might of the Pentagon’s military force nine-fold will seriously shift the balance of power in Washington’s favor.
The country’s top warmongers will be fighting amongst themselves over whom to strike first, reminding apprehensive citizens that it’s not as if their sons and daughters will be put at risk unless the US’ chosen target just happened to lob a bomb overseas in retaliation.
The chilling possibility that the avoidance of home-team casualties would open the door to all the wars the US has ever wanted to fight, but not had the military capacity for, runs up against the fact that Washington – already over $22 trillion in debt – can’t possibly afford to further expand its already monstrous military footprint. Yet the Pentagon never runs out of money – it added $21 billion to its budget this year, and has never been told there’s no funding for even the most ill-advised weapons program.
So while money might be a logical protection against the endless expansion of the endless war, in the US, the Fed will just print more.
Autonomous killer robots – bots that select and kill their own targets using artificial intelligence – are the logical endpoint of the mission to dehumanize war completely, filling the ranks with soldiers that won’t ask questions, won’t talk back, and won’t hesitate to shoot whoever they’re told to shoot.
The pitfalls to such a technology are obvious. If AI can’t be trained to distinguish between sarcasm and normal speech, or between a convicted felon and a congressman, how can it be trusted to reliably distinguish between civilian and soldier? Or even between friend and foe?
The first human soldier to die by a robot comrade’s “friendly fire” will no doubt be framed as an accident, but who will be held responsible in the event of a full-on robot mutiny? That would make the Matrix and Terminator stories look prophetic.
The metal rat-minded military is aware of the bad publicity. A recent Pentagon study (focusing on robotically-enhanced humans but applicable to robotic soldiers as well) warns that it’s better to “anticipate” and “prepare” for the impact of these technologies by crafting a regulatory framework in advance than to hastily impose one later, presumably in reaction to some catastrophic robot mishap.
In order to have their hands free to develop the technology as they see fit, military leaders should make an effort to “reverse the negative cultural narratives of enhancement technologies,” lest the dystopian narratives civilians carry in their heads spoil all the fun.
Meanwhile, the Campaign to Stop Killer Robots, a coalition of anti-war groups, scientists, academics, and politicians who’d rather not take a ‘wait and see’ approach to a technology that could destroy the human race, is calling on the United Nations to adopt an international ban on autonomous killing machines.
According to the Campaign to Stop Killer Robots, the stage is being set for a potentially destabilizing “robotic arms race” that could see countries worldwide working to gain the upper-hand in building their autonomous warfighting capabilities.
The militaries of the U.S., Russia, China, Israel, South Korea, and the United Kingdom have already developed advanced systems that enjoy significant autonomy in their ability to select and attack targets, the campaign notes.
And while countries across the Global South have urged the UN to impose a ban on killer robots, states who possess these technologies have opposed such a ban at every turn — signaling that they are unwilling to let go of their revolutionary new implements of death.
Former Google representative Laura Nolan has said that she had joined the Campaign to Stop Killer Robots because the robot systems envisioned by Big Tech firms and militaries could potentially do “calamitous things that they were not originally programmed for.”
Nolan resigned last year in protest of Google’s Project Maven, which was meant to dramatically upgrade U.S. military drones’ AI capabilities.
“The likelihood of a disaster is in proportion to how many of these machines will be in a particular area at once. What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed.
There could be large-scale accidents because these things will start to behave in unexpected ways. This is why any advanced weapons systems should be subject to meaningful human control, otherwise, they have to be banned because they are far too unpredictable and dangerous.”
According to a supplemental report to President Trump’s budget request, the federal government is poised to spend nearly $1 billion on nondefense AI research and development in fiscal year 2020.
Meanwhile, the military is searching for what it calls a dead hand technology that will launch nukes if no one is left to order a retaliatory strike.
Their solution? To fit the U.S. nuclear arsenal with artificial intelligence controls. This, of course, echoes some of the doomsday machine worries that were presented in Dr. Strangelove.
And while the Cold War may long be a thing of the past, experts claim that such a system “that might seem unfathomable” remains necessary “to reinforce the desired deterrent effect” of the U.S. war machine in the face of rivals’ nuclear modernization programs while giving a boost to second-strike capabilities.
The potential application of Artificial Intelligence tech to nuclear warfare is hardly a fresh concept. The United Nations University explains that during the height of the late 20th-century nuclear rivalry between the U.S. and the former USSR, both countries’ respective military leaderships explored the option of bolstering their nuclear capabilities with varying degrees of AI.
However, the Soviet Union was the only country to develop a semi-automated nuclear launch mechanism – the Dead Hand system – to ensure an all-out nuclear response in case its leadership was decapitated.
In the science fiction movie, Prometheus, the fictitious Weyland Corporation was able to create a viable robot for the military in 2023.
“We are now three months into the year of our Lord, 2023,” intones the corporate titan Peter Weyland after listing the key inventions that arrived subsequent to Promethean fire. “At this moment in our civilization, we can create cybernetic individuals who, in just a few short years, will be completely indistinguishable from us.”
He sells artificial intelligence and cybernetic artificial humans like a televangelist sells prayer calls and bibles.
Later, we are introduced to David, a robot made especially for space travel. We learn that robots programmed to do the bidding of a corrupt system become monsters – and it should be a cautionary tale for the world.
Weyland evokes the same hubris that motivated the fictional Dr. Frankenstein to create his monster. Mary Shelley, however, in her novel Frankenstein, or The Modern Prometheus, criticized such hubris and instead affirmed a reverent belief in the natural world and its limitations.
Likewise, director James Whale in his classic Hollywood films, Frankenstein and Bride of Frankenstein, had criticized the same hubris by depicting the “monster” as sympathetic, notwithstanding endings in which the “monster” is destroyed.
The question is, who will win in the struggle between what is human and what is a machine and will machines become monsters if we give them the keys to our weapons?
When real-world bright minds like Elon Musk and Stephen Hawking raise the alarm that AI can spell humanity’s doom, perhaps we should listen.