The Trump administration has authorized the Department of Defense to begin testing advanced military technology to fight “fake news” and to thwart “large-scale, automated disinformation attacks,” according to an article published by Bloomberg.
Tasked with taking down the “disinformation” is the Pentagon’s secretive research arm, the Defense Advanced Research Projects Agency (DARPA). The Bloomberg story reports that DARPA “wants custom software that can unearth fakes hidden among more than 500,000 stories, photos, video and audio clips. If successful, the system after four years of trials may expand to detect malicious intent and prevent viral fake news from polarizing society.”
As is always the case when the federal Leviathan looks to enlarge the scope of its authority, there are academics all too willing to lend their credentials to the federal government’s excuse for consolidating control over all aspects of human existence, even speech.
Even in 1651 there was no shortage of scholars siding with government to gag the public. In an essay written by John Milton, the celebrated friend of liberty called out the professors acting as mouthpieces for those who would deprive people of all their rights.
“Nature and laws would be in an ill case, if slavery should find what to say for itself, and liberty be mute: and if tyrants should find men to plead for them, and they that can master and vanquish tyrants, should not be able to find advocates,” Milton wrote.
DARPA has a long history of helping the federal government keep an eye on anything it considers potentially problematic.
In 2012, Forbes reported that DARPA contracted with scientists at Carnegie Mellon University to develop “an artificial intelligence system that can watch and predict what a person will ‘likely’ do in the future using specially programmed software designed to analyze various real-time video surveillance feeds. The system can automatically identify and notify officials if it recognized that an action is not permitted, detecting what is described as anomalous behaviors.”
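In broad strokes, this kind of “anomalous behavior” detection amounts to fitting a statistical model of routine activity and flagging whatever deviates from it. Forbes did not describe Carnegie Mellon’s actual implementation, so the following Python sketch, which runs scikit-learn’s IsolationForest over made-up feature vectors, is purely illustrative of the general idea.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical stand-in data: each row summarizes one clip of surveillance
# video (e.g., speed of motion, loitering time, crowd density, direction).
# A real system would extract such features with computer-vision models.
rng = np.random.default_rng(0)
normal_clips = rng.normal(loc=0.0, scale=1.0, size=(500, 4))

# Fit a model of "normal" behavior from routine footage.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_clips)

# Score incoming clips; a prediction of -1 marks the clip as anomalous.
incoming = np.vstack([
    rng.normal(0, 1, (3, 4)),   # routine behavior
    rng.normal(6, 1, (1, 4)),   # an outlier clip
])
for i, verdict in enumerate(model.predict(incoming)):
    if verdict == -1:
        print(f"clip {i}: anomalous -- notify officials")
    else:
        print(f"clip {i}: normal")
```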
The Pentagon planned to deploy the devices at “airports and bus stations,” but there is little doubt that these predictive monitors have been installed right next to the red light cameras already mounted at nearly every intersection in America.
In the same story, Forbes also reported that “Carnegie Mellon is one of 15 research teams and commercial integrators that is participating in a five-year program, started in 2010, to develop smart video software.”
Now, take all that technology and turn it toward taking down any social-media post (including photos), online news story, or podcast episode that doesn’t tell a version of the facts that squares with the story the federal government wants the public to believe.
This is no longer a pessimistic prediction of a dystopian future. This is our situation today.
Of course, the question no one seems to be asking in all the stories covering the plan to put the Pentagon in charge of what gets published and what gets flagged as “fake news” is just who will be programming the technology that makes that decision.
Is there any president — regardless of party — whom you would trust to be the gatekeeper of the stories seen by the people? While you may answer in the negative, there is an impressive percentage of Americans who would gladly give the current president just that power.
Research conducted by Ipsos revealed that 43 percent of self-identified Republicans believe that “the president should have the authority to close news outlets engaged in bad behavior.” A shocking 26 percent of respondents overall agreed with that statement as well.
What is nearly unbelievable, and unacceptable, is that more than four out of every 10 Republicans surveyed felt that the best way to push back against the press is to give the president the power to shut down those outlets he considers bad actors.
Twenty-three percent of Republicans and 13 percent of Americans overall agreed that “President Trump should close down mainstream news outlets such as CNN, the Washington Post, and the New York Times.”
It seems substantial numbers of our fellow citizens would applaud the decision to put the Pentagon in charge of choosing what we get to read, see, and hear and what gets removed by the censors’ scissors.
That isn’t to say there is no genuine concern that the things we see may be artificial: computer-generated images so lifelike that few of us could tell a human from a convincing facsimile. As reported in the Bloomberg piece:
False news stories and so-called deepfakes are increasingly sophisticated and making it more difficult for data-driven software to spot. AI imagery has advanced in recent years and is now used by Hollywood, the fashion industry and facial recognition systems. Researchers have shown that these generative adversarial networks — or GANs — can be used to create fake videos.
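For readers curious about the underlying mechanics: a GAN pits two neural networks against each other, a generator that fabricates samples and a discriminator that tries to tell fakes from real data, each improving by trying to beat the other. The Bloomberg piece gives no implementation details, so the Python sketch below (using PyTorch, with toy vectors standing in for images) is only a minimal illustration of the training loop.

```python
import torch
import torch.nn as nn

# Generator: maps random noise to a fake "sample" (a small vector here,
# standing in for an image or video frame).
generator = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 32),
)

# Discriminator: scores a sample as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # Placeholder "real" data; in a genuine deepfake pipeline these would
    # be real images or video frames.
    real = torch.randn(8, 32) + 2.0
    noise = torch.randn(8, 16)
    fake = generator(noise)

    # Train the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(8, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(8, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(8, 1))
    g_loss.backward()
    g_opt.step()
```

In real deepfake pipelines the same adversarial loop runs over images or video frames with far larger networks; nothing about the loop itself changes.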
Famously, Oscar-winning filmmaker Jordan Peele created a fake video of former President Barack Obama talking about the Black Panthers and Ben Carson and uttering an alleged slur against Trump, all to highlight the risk of trusting material found online.
Andrew Grotto, of the Center for International Security and Cooperation at Stanford University, told Bloomberg that the quality of computer-created “deepfakes” is now so convincing that there is a real danger people will be unable to tell genuine news from propaganda.
“Where things get especially scary is the prospect of malicious actors combining different forms of fake content into a seamless platform,” Grotto said, as quoted in the Bloomberg story. “Researchers can already produce convincing fake videos, generate persuasively realistic text, and deploy chatbots to interact with people. Imagine the potential persuasive impact on vulnerable people that integrating these technologies could have: an interactive deepfake of an influential person engaged in AI-directed propaganda on a bot-to-person basis.”
That level of technological precision is undoubtedly disturbing. What is more disturbing, though (and Grotto doesn’t mention it), is who gets to decide what is propaganda and what is news. Who is to distinguish a “deepfake” from a damaging truth?
Not to worry, though: the folks at DARPA promise to be very reliable referees of truth versus fiction.
“A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies,” DARPA writes in the paper promoting its Semantic Forensics program.
See? Nothing to worry about! The military will make sure that only well-worded, carefully crafted stories get through the federal filters.
Inconsistency detectors, eh? Would that include flagging any of the thousands of unconstitutional inconsistencies that are enacted daily by Congress or the corps of Executive Branch bureaucrats? Could we count on DARPA to deep-six the “deepfake” of despotism sold to the public as necessary to protect their safety?
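Sarcasm aside, the asymmetry DARPA describes is easy to illustrate in code: a forger must pass every check, while a defender needs only one check to fail. The sketch below is a toy illustration in Python; the particular “detectors” (metadata dates, GPS tags) are hypothetical stand-ins of our own, not anything taken from the Semantic Forensics paper.

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    # Hypothetical fields for illustration only.
    claimed_date: str
    metadata_date: str
    claimed_location: str
    gps_location: str

def date_consistent(item: MediaItem) -> bool:
    return item.claimed_date == item.metadata_date

def location_consistent(item: MediaItem) -> bool:
    return item.claimed_location == item.gps_location

# A suite of independent semantic checks: the forger must pass ALL of
# them, while the defender needs only one to fail.
DETECTORS = [date_consistent, location_consistent]

def looks_falsified(item: MediaItem) -> bool:
    return any(not check(item) for check in DETECTORS)

sample = MediaItem(
    claimed_date="2019-08-01", metadata_date="2019-09-15",
    claimed_location="Washington", gps_location="Washington",
)
print(looks_falsified(sample))  # True: one inconsistency is enough
```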
Finally, consider carefully the warning given in 1815 by French philosopher Benjamin Constant, concerning what happens when a people place such immense power over the press in the hands of government:
By authorizing the government to deal ruthlessly with whatever opinions there may be, you are giving it the right to interpret thought, to make inductions, in a nutshell to reason and to put its reasoning in the place of the facts which ought to be the sole basis for government counteraction.
This is to establish despotism with a free hand. Which opinion cannot draw down a punishment on its author? You give the government a free hand for evildoing, provided that it is careful to engage in evil thinking. You will never escape from this circle.
The men to whom you entrust the right to judge opinions are quite as susceptible as others to being misled or corrupted, and the arbitrary power which you will have invested in them can be used against the most necessary truths as well as the most fatal errors.
Image: icholakov via iStock / Getty Images Plus